Test Report: QEMU_macOS 18929

b7c7f6c35857e0c10d9dae71da379568bba5603f:2024-05-20:34549

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.98
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.07
27 TestAddons/Setup 10
28 TestCertOptions 10.23
29 TestCertExpiration 195.48
30 TestDockerFlags 10.11
31 TestForceSystemdFlag 10.3
32 TestForceSystemdEnv 10.08
38 TestErrorSpam/setup 9.8
47 TestFunctional/serial/StartWithProxy 10.17
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.95
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.19
73 TestFunctional/parallel/StatusCmd 0.16
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.28
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.27
89 TestFunctional/parallel/NodeLabels 0.05
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 115.9
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.04
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
104 TestFunctional/parallel/ServiceCmd/Format 0.04
105 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/Version/components 0.04
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.6
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.06
127 TestFunctional/parallel/DockerEnv/bash 0.04
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 39.15
141 TestMultiControlPlane/serial/StartCluster 10.11
142 TestMultiControlPlane/serial/DeployApp 84.04
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 50.01
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.94
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.77
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 9.83
165 TestJSONOutput/start/Command 9.67
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.19
197 TestMountStart/serial/StartWithMountFirst 9.95
200 TestMultiNode/serial/FreshStart2Nodes 9.88
201 TestMultiNode/serial/DeployApp2Nodes 93.09
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 58.66
209 TestMultiNode/serial/RestartKeepsNodes 8.72
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.28
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.13
217 TestPreload 9.93
219 TestScheduledStopUnix 9.88
220 TestSkaffold 11.97
223 TestRunningBinaryUpgrade 587.81
225 TestKubernetesUpgrade 17.68
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.15
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.94
241 TestStoppedBinaryUpgrade/Upgrade 574.43
243 TestPause/serial/Start 9.99
253 TestNoKubernetes/serial/StartWithK8s 9.85
254 TestNoKubernetes/serial/StartWithStopK8s 5.28
255 TestNoKubernetes/serial/Start 5.3
259 TestNoKubernetes/serial/StartNoArgs 5.34
261 TestNetworkPlugins/group/auto/Start 9.71
262 TestNetworkPlugins/group/calico/Start 9.75
263 TestNetworkPlugins/group/custom-flannel/Start 9.83
264 TestNetworkPlugins/group/false/Start 9.86
265 TestNetworkPlugins/group/kindnet/Start 9.9
266 TestNetworkPlugins/group/flannel/Start 9.87
267 TestNetworkPlugins/group/enable-default-cni/Start 9.76
268 TestNetworkPlugins/group/bridge/Start 9.84
269 TestNetworkPlugins/group/kubenet/Start 9.88
271 TestStartStop/group/old-k8s-version/serial/FirstStart 10.06
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.87
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/no-preload/serial/SecondStart 5.23
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.1
294 TestStartStop/group/embed-certs/serial/FirstStart 9.99
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 12.18
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/embed-certs/serial/SecondStart 5.25
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.63
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 10.02
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.25
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (10.98s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-533000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-533000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.9832905s)

-- stdout --
	{"specversion":"1.0","id":"d28a926c-9e95-4f87-ab7f-fdaf7b098917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-533000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcdb162a-39c7-4f8a-9f09-86e4397c5efb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18929"}}
	{"specversion":"1.0","id":"e00328f2-bcda-47e1-9edb-7765d2251128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig"}}
	{"specversion":"1.0","id":"36b544e9-9347-4b80-8a9a-228baf229d0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"59b15b84-3794-4e86-9e14-1f331744db4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0701853-41f0-4e06-a3f5-93e261921bc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube"}}
	{"specversion":"1.0","id":"b847a03b-a930-48b5-ada6-f49ccd6526d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"8ce9deae-3acb-4c61-84f0-d546b69e17d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9769679c-96a2-4d45-bf52-dddc35606973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5acf4d65-8bfc-4163-8160-44a52f3b5e85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"33226efd-5fc5-456f-b7f4-0fd6ec4d7078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-533000\" primary control-plane node in \"download-only-533000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ddb9f0f-1e78-4eb7-a1a8-d62d5957c917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"98bbc67d-3918-478e-9a8d-737eb581288d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106469380 0x106469380 0x106469380 0x106469380 0x106469380 0x106469380 0x106469380] Decompressors:map[bz2:0x1400045e5b0 gz:0x1400045e5b8 tar:0x1400045e4b0 tar.bz2:0x1400045e4e0 tar.gz:0x1400045e500 tar.xz:0x1400045e540 tar.zst:0x1400045e560 tbz2:0x1400045e4e0 tgz:0x1
400045e500 txz:0x1400045e540 tzst:0x1400045e560 xz:0x1400045e5c0 zip:0x1400045e5d0 zst:0x1400045e5c8] Getters:map[file:0x1400072dc70 http:0x14000898460 https:0x140008984b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"acf5a6ce-6579-445e-961d-e43e50a18fea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0520 04:41:59.158800   19519 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:59.158945   19519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:59.158948   19519 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:59.158951   19519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:59.159075   19519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	W0520 04:41:59.159156   19519 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18929-19024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18929-19024/.minikube/config/config.json: no such file or directory
	I0520 04:41:59.160417   19519 out.go:298] Setting JSON to true
	I0520 04:41:59.176836   19519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9690,"bootTime":1716195629,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:59.176896   19519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:59.182296   19519 out.go:97] [download-only-533000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:59.185430   19519 out.go:169] MINIKUBE_LOCATION=18929
	I0520 04:41:59.182431   19519 notify.go:220] Checking for updates...
	W0520 04:41:59.182460   19519 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 04:41:59.193235   19519 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:41:59.196374   19519 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:59.199380   19519 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:59.203231   19519 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	W0520 04:41:59.209346   19519 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 04:41:59.209548   19519 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:59.212327   19519 out.go:97] Using the qemu2 driver based on user configuration
	I0520 04:41:59.212344   19519 start.go:297] selected driver: qemu2
	I0520 04:41:59.212358   19519 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:41:59.212410   19519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:41:59.215312   19519 out.go:169] Automatically selected the socket_vmnet network
	I0520 04:41:59.220569   19519 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 04:41:59.220664   19519 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:41:59.220690   19519 cni.go:84] Creating CNI manager for ""
	I0520 04:41:59.220708   19519 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:41:59.220763   19519 start.go:340] cluster config:
	{Name:download-only-533000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:59.225552   19519 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:59.229364   19519 out.go:97] Downloading VM boot image ...
	I0520 04:41:59.229381   19519 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso
	I0520 04:42:03.588366   19519 out.go:97] Starting "download-only-533000" primary control-plane node in "download-only-533000" cluster
	I0520 04:42:03.588396   19519 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:42:03.646497   19519 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:42:03.646524   19519 cache.go:56] Caching tarball of preloaded images
	I0520 04:42:03.647520   19519 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:42:03.650863   19519 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 04:42:03.650870   19519 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:03.727723   19519 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:42:08.983381   19519 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:08.983544   19519 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:09.680281   19519 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:42:09.680488   19519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/download-only-533000/config.json ...
	I0520 04:42:09.680508   19519 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/download-only-533000/config.json: {Name:mkc4239b44e2dd244cc9a8aca81a5ab2bee270c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:42:09.681805   19519 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:42:09.682003   19519 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0520 04:42:10.063849   19519 out.go:169] 
	W0520 04:42:10.068876   19519 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106469380 0x106469380 0x106469380 0x106469380 0x106469380 0x106469380 0x106469380] Decompressors:map[bz2:0x1400045e5b0 gz:0x1400045e5b8 tar:0x1400045e4b0 tar.bz2:0x1400045e4e0 tar.gz:0x1400045e500 tar.xz:0x1400045e540 tar.zst:0x1400045e560 tbz2:0x1400045e4e0 tgz:0x1400045e500 txz:0x1400045e540 tzst:0x1400045e560 xz:0x1400045e5c0 zip:0x1400045e5d0 zst:0x1400045e5c8] Getters:map[file:0x1400072dc70 http:0x14000898460 https:0x140008984b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 04:42:10.068898   19519 out_reason.go:110] 
	W0520 04:42:10.076817   19519 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:42:10.080826   19519 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-533000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.98s)
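Note on the failure above: v1.20.0 predates Kubernetes release binaries for darwin/arm64, so the kubectl checksum URL in the error genuinely returns 404 on this Apple-silicon agent; the failure is environmental to the version/arch combination rather than flaky. A quick way to confirm from a shell, outside the harness (URLs copied from the error above; the v1.30.1 line is only an illustrative comparison):

	# Expect 404: no darwin/arm64 kubectl (or its .sha256) was published for v1.20.0
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Expect 200: newer releases do ship darwin/arm64 binaries
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256

The TestDownloadOnly/v1.20.0/kubectl failure that follows is a direct cascade: this download never completed, so the cached binary it stats for was never written.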

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-092000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-092000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.938676209s)

-- stdout --
	* [offline-docker-092000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-092000" primary control-plane node in "offline-docker-092000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-092000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:53:32.371825   21068 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:53:32.372017   21068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:32.372022   21068 out.go:304] Setting ErrFile to fd 2...
	I0520 04:53:32.372025   21068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:32.372169   21068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:53:32.373406   21068 out.go:298] Setting JSON to false
	I0520 04:53:32.391004   21068 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10383,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:53:32.391080   21068 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:53:32.396081   21068 out.go:177] * [offline-docker-092000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:53:32.403110   21068 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:53:32.403138   21068 notify.go:220] Checking for updates...
	I0520 04:53:32.410050   21068 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:53:32.412994   21068 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:53:32.416058   21068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:53:32.419103   21068 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:53:32.422037   21068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:53:32.425441   21068 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:32.425502   21068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:53:32.429049   21068 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:53:32.436056   21068 start.go:297] selected driver: qemu2
	I0520 04:53:32.436066   21068 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:53:32.436076   21068 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:53:32.438067   21068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:53:32.441042   21068 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:53:32.442230   21068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:53:32.442246   21068 cni.go:84] Creating CNI manager for ""
	I0520 04:53:32.442253   21068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:53:32.442259   21068 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:53:32.442293   21068 start.go:340] cluster config:
	{Name:offline-docker-092000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:53:32.446974   21068 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:32.454063   21068 out.go:177] * Starting "offline-docker-092000" primary control-plane node in "offline-docker-092000" cluster
	I0520 04:53:32.458041   21068 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:53:32.458079   21068 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:53:32.458092   21068 cache.go:56] Caching tarball of preloaded images
	I0520 04:53:32.458163   21068 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:53:32.458168   21068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:53:32.458242   21068 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/offline-docker-092000/config.json ...
	I0520 04:53:32.458253   21068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/offline-docker-092000/config.json: {Name:mkf4394cc767e3e2bb1e4d18a0670bc22941853a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:53:32.458499   21068 start.go:360] acquireMachinesLock for offline-docker-092000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:32.458545   21068 start.go:364] duration metric: took 39.375µs to acquireMachinesLock for "offline-docker-092000"
	I0520 04:53:32.458564   21068 start.go:93] Provisioning new machine with config: &{Name:offline-docker-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:32.458591   21068 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:32.463093   21068 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:32.478519   21068 start.go:159] libmachine.API.Create for "offline-docker-092000" (driver="qemu2")
	I0520 04:53:32.478549   21068 client.go:168] LocalClient.Create starting
	I0520 04:53:32.478610   21068 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:32.478649   21068 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:32.478657   21068 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:32.478706   21068 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:32.478728   21068 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:32.478736   21068 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:32.479108   21068 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:32.613939   21068 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:32.872277   21068 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:32.872291   21068 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:32.872488   21068 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2
	I0520 04:53:32.886002   21068 main.go:141] libmachine: STDOUT: 
	I0520 04:53:32.886029   21068 main.go:141] libmachine: STDERR: 
	I0520 04:53:32.886100   21068 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2 +20000M
	I0520 04:53:32.898116   21068 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:32.898136   21068 main.go:141] libmachine: STDERR: 
	I0520 04:53:32.898159   21068 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2
	I0520 04:53:32.898162   21068 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:32.898194   21068 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a2:e7:56:bd:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2
	I0520 04:53:32.900232   21068 main.go:141] libmachine: STDOUT: 
	I0520 04:53:32.900253   21068 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:32.900282   21068 client.go:171] duration metric: took 421.73075ms to LocalClient.Create
	I0520 04:53:34.902374   21068 start.go:128] duration metric: took 2.443788917s to createHost
	I0520 04:53:34.902401   21068 start.go:83] releasing machines lock for "offline-docker-092000", held for 2.443862125s
	W0520 04:53:34.902435   21068 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:34.915046   21068 out.go:177] * Deleting "offline-docker-092000" in qemu2 ...
	W0520 04:53:34.926954   21068 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:34.926967   21068 start.go:728] Will try again in 5 seconds ...
	I0520 04:53:39.929184   21068 start.go:360] acquireMachinesLock for offline-docker-092000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:39.929697   21068 start.go:364] duration metric: took 364.459µs to acquireMachinesLock for "offline-docker-092000"
	I0520 04:53:39.929834   21068 start.go:93] Provisioning new machine with config: &{Name:offline-docker-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:39.930129   21068 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:39.939044   21068 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:39.986534   21068 start.go:159] libmachine.API.Create for "offline-docker-092000" (driver="qemu2")
	I0520 04:53:39.986594   21068 client.go:168] LocalClient.Create starting
	I0520 04:53:39.986709   21068 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:39.986776   21068 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:39.986794   21068 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:39.986884   21068 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:39.986927   21068 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:39.986939   21068 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:39.987558   21068 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:40.127374   21068 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:40.222219   21068 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:40.222224   21068 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:40.222504   21068 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2
	I0520 04:53:40.235014   21068 main.go:141] libmachine: STDOUT: 
	I0520 04:53:40.235035   21068 main.go:141] libmachine: STDERR: 
	I0520 04:53:40.235099   21068 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2 +20000M
	I0520 04:53:40.245957   21068 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:40.245986   21068 main.go:141] libmachine: STDERR: 
	I0520 04:53:40.246000   21068 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2
	I0520 04:53:40.246005   21068 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:40.246071   21068 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:88:b5:26:3e:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/offline-docker-092000/disk.qcow2
	I0520 04:53:40.247733   21068 main.go:141] libmachine: STDOUT: 
	I0520 04:53:40.247748   21068 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:40.247761   21068 client.go:171] duration metric: took 261.162625ms to LocalClient.Create
	I0520 04:53:42.248280   21068 start.go:128] duration metric: took 2.318152834s to createHost
	I0520 04:53:42.248299   21068 start.go:83] releasing machines lock for "offline-docker-092000", held for 2.318597625s
	W0520 04:53:42.248403   21068 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:42.256634   21068 out.go:177] 
	W0520 04:53:42.260732   21068 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:53:42.260744   21068 out.go:239] * 
	* 
	W0520 04:53:42.261375   21068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:53:42.271646   21068 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-092000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-20 04:53:42.284498 -0700 PDT m=+703.191585667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-092000 -n offline-docker-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-092000 -n offline-docker-092000: exit status 7 (29.484375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-092000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-092000
--- FAIL: TestOffline (10.07s)
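Note on the failure above: this test, and nearly every other ~10-second start failure in this report, dies on the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused` while socket_vmnet_client launches the qemu2 VM. That points at the socket_vmnet daemon not running (or not listening on that socket) on the agent, rather than at the individual tests. A minimal host-side sanity check (the daemon binary path is assumed to mirror the SocketVMnetClientPath seen in the log, and the gateway address is socket_vmnet's documented default; adjust both to the agent's setup):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, restarting it should clear the "Connection refused":
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet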

TestAddons/Setup (10s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-892000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-892000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.000415958s)

-- stdout --
	* [addons-892000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-892000" primary control-plane node in "addons-892000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-892000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:42:18.437615   19627 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:42:18.437746   19627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:42:18.437749   19627 out.go:304] Setting ErrFile to fd 2...
	I0520 04:42:18.437752   19627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:42:18.437879   19627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:42:18.439026   19627 out.go:298] Setting JSON to false
	I0520 04:42:18.454970   19627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9709,"bootTime":1716195629,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:42:18.455035   19627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:42:18.458638   19627 out.go:177] * [addons-892000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:42:18.465560   19627 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:42:18.469579   19627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:42:18.465608   19627 notify.go:220] Checking for updates...
	I0520 04:42:18.475493   19627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:42:18.478580   19627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:42:18.481409   19627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:42:18.484535   19627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:42:18.487696   19627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:42:18.490411   19627 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:42:18.497487   19627 start.go:297] selected driver: qemu2
	I0520 04:42:18.497493   19627 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:42:18.497499   19627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:42:18.499785   19627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:42:18.501107   19627 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:42:18.504676   19627 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:42:18.504692   19627 cni.go:84] Creating CNI manager for ""
	I0520 04:42:18.504699   19627 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:42:18.504703   19627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:42:18.504749   19627 start.go:340] cluster config:
	{Name:addons-892000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:42:18.509301   19627 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:42:18.517534   19627 out.go:177] * Starting "addons-892000" primary control-plane node in "addons-892000" cluster
	I0520 04:42:18.521507   19627 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:42:18.521559   19627 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:42:18.521567   19627 cache.go:56] Caching tarball of preloaded images
	I0520 04:42:18.521642   19627 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:42:18.521648   19627 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:42:18.521863   19627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/addons-892000/config.json ...
	I0520 04:42:18.521874   19627 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/addons-892000/config.json: {Name:mk2648bf31fa9554d92e91cdb5d84df3c1b20fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:42:18.522239   19627 start.go:360] acquireMachinesLock for addons-892000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:42:18.522303   19627 start.go:364] duration metric: took 58.708µs to acquireMachinesLock for "addons-892000"
	I0520 04:42:18.522315   19627 start.go:93] Provisioning new machine with config: &{Name:addons-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:42:18.522345   19627 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:42:18.526472   19627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 04:42:18.543507   19627 start.go:159] libmachine.API.Create for "addons-892000" (driver="qemu2")
	I0520 04:42:18.543533   19627 client.go:168] LocalClient.Create starting
	I0520 04:42:18.543653   19627 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:42:18.624999   19627 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:42:18.682244   19627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:42:18.883977   19627 main.go:141] libmachine: Creating SSH key...
	I0520 04:42:18.969193   19627 main.go:141] libmachine: Creating Disk image...
	I0520 04:42:18.969202   19627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:42:18.969415   19627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2
	I0520 04:42:18.982407   19627 main.go:141] libmachine: STDOUT: 
	I0520 04:42:18.982432   19627 main.go:141] libmachine: STDERR: 
	I0520 04:42:18.982487   19627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2 +20000M
	I0520 04:42:18.993580   19627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:42:18.993600   19627 main.go:141] libmachine: STDERR: 
	I0520 04:42:18.993614   19627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2
	I0520 04:42:18.993617   19627 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:42:18.993651   19627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:78:bd:e4:d9:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2
	I0520 04:42:18.995379   19627 main.go:141] libmachine: STDOUT: 
	I0520 04:42:18.995399   19627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:42:18.995418   19627 client.go:171] duration metric: took 451.88475ms to LocalClient.Create
	I0520 04:42:20.997569   19627 start.go:128] duration metric: took 2.475231458s to createHost
	I0520 04:42:20.997627   19627 start.go:83] releasing machines lock for "addons-892000", held for 2.475336333s
	W0520 04:42:20.997719   19627 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:42:21.004957   19627 out.go:177] * Deleting "addons-892000" in qemu2 ...
	W0520 04:42:21.032010   19627 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:42:21.032038   19627 start.go:728] Will try again in 5 seconds ...
	I0520 04:42:26.034244   19627 start.go:360] acquireMachinesLock for addons-892000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:42:26.034761   19627 start.go:364] duration metric: took 385.75µs to acquireMachinesLock for "addons-892000"
	I0520 04:42:26.034869   19627 start.go:93] Provisioning new machine with config: &{Name:addons-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:42:26.035135   19627 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:42:26.043784   19627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 04:42:26.091961   19627 start.go:159] libmachine.API.Create for "addons-892000" (driver="qemu2")
	I0520 04:42:26.092010   19627 client.go:168] LocalClient.Create starting
	I0520 04:42:26.092144   19627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:42:26.092201   19627 main.go:141] libmachine: Decoding PEM data...
	I0520 04:42:26.092218   19627 main.go:141] libmachine: Parsing certificate...
	I0520 04:42:26.092305   19627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:42:26.092352   19627 main.go:141] libmachine: Decoding PEM data...
	I0520 04:42:26.092365   19627 main.go:141] libmachine: Parsing certificate...
	I0520 04:42:26.092989   19627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:42:26.232521   19627 main.go:141] libmachine: Creating SSH key...
	I0520 04:42:26.340411   19627 main.go:141] libmachine: Creating Disk image...
	I0520 04:42:26.340418   19627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:42:26.340603   19627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2
	I0520 04:42:26.353186   19627 main.go:141] libmachine: STDOUT: 
	I0520 04:42:26.353209   19627 main.go:141] libmachine: STDERR: 
	I0520 04:42:26.353268   19627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2 +20000M
	I0520 04:42:26.364107   19627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:42:26.364127   19627 main.go:141] libmachine: STDERR: 
	I0520 04:42:26.364139   19627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2
	I0520 04:42:26.364145   19627 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:42:26.364181   19627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d0:89:78:6b:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/addons-892000/disk.qcow2
	I0520 04:42:26.365846   19627 main.go:141] libmachine: STDOUT: 
	I0520 04:42:26.365866   19627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:42:26.365884   19627 client.go:171] duration metric: took 273.870833ms to LocalClient.Create
	I0520 04:42:28.368024   19627 start.go:128] duration metric: took 2.332865s to createHost
	I0520 04:42:28.368071   19627 start.go:83] releasing machines lock for "addons-892000", held for 2.333314583s
	W0520 04:42:28.368769   19627 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:42:28.379468   19627 out.go:177] 
	W0520 04:42:28.383500   19627 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:42:28.383547   19627 out.go:239] * 
	* 
	W0520 04:42:28.386467   19627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:42:28.396488   19627 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-892000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.00s)
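
The stderr trace above records the exact launch command: /opt/socket_vmnet/bin/socket_vmnet_client wraps the whole qemu-system-aarch64 invocation and must connect to /var/run/socket_vmnet before QEMU ever starts. As a sketch, the failure should be reproducible outside minikube by running the client with a trivial child command (assuming the client's usage is "socket_vmnet_client SOCKET COMMAND...", as the invocation in the log suggests):

    # Hypothetical minimal reproduction: with the daemon down, this should
    # print the same 'Connection refused' without starting any VM;
    # /usr/bin/true stands in for the QEMU command line.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true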

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-020000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-020000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.942403708s)

-- stdout --
	* [cert-options-020000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-020000" primary control-plane node in "cert-options-020000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-020000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-020000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-020000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-020000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-020000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.344292ms)

-- stdout --
	* The control-plane node cert-options-020000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-020000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-020000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-020000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-020000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-020000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.990041ms)

-- stdout --
	* The control-plane node cert-options-020000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-020000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-020000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-020000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-020000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-20 04:54:12.70405 -0700 PDT m=+733.611357709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-020000 -n cert-options-020000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-020000 -n cert-options-020000: exit status 7 (29.627083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-020000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-020000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-020000
--- FAIL: TestCertOptions (10.23s)
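
For context, the SAN assertions above never really ran: exit status 83 accompanies the "host is not running" advice, so the ssh command returned guidance text instead of a certificate. On a cluster that actually boots, the check the test performs amounts to something like this sketch (the grep pattern is illustrative, not the test's own code):

    # Surface the Subject Alternative Name entries the test greps for
    out/minikube-darwin-arm64 -p cert-options-020000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"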

TestCertExpiration (195.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-558000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-558000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.08130625s)

-- stdout --
	* [cert-expiration-558000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-558000" primary control-plane node in "cert-expiration-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-558000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-558000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-558000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.238364667s)

-- stdout --
	* [cert-expiration-558000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-558000" primary control-plane node in "cert-expiration-558000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-558000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-558000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-558000" primary control-plane node in "cert-expiration-558000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-20 04:57:12.727608 -0700 PDT m=+913.636220626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-558000 -n cert-expiration-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-558000 -n cert-expiration-558000: exit status 7 (59.835625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-558000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-558000
--- FAIL: TestCertExpiration (195.48s)
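
The ~195 s wall time versus two starts of ~10 s and ~5 s is consistent with the test waiting out the 3-minute --cert-expiration=3m window between the two starts before retrying with --cert-expiration=8760h and expecting an expired-certs warning; here no VM ever booted, so there were no certs to expire. On a healthy cluster, the validity window the test manipulates could be inspected with a one-liner like this sketch (cert path taken from the test; run via minikube ssh since the file lives inside the VM):

    # Print the notBefore/notAfter window of the apiserver certificate
    out/minikube-darwin-arm64 -p cert-expiration-558000 ssh \
      "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"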

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-422000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-422000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.864853583s)

-- stdout --
	* [docker-flags-422000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-422000" primary control-plane node in "docker-flags-422000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-422000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:53:52.518561   21269 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:53:52.518697   21269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:52.518700   21269 out.go:304] Setting ErrFile to fd 2...
	I0520 04:53:52.518702   21269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:52.518827   21269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:53:52.519909   21269 out.go:298] Setting JSON to false
	I0520 04:53:52.535888   21269 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10403,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:53:52.535945   21269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:53:52.541537   21269 out.go:177] * [docker-flags-422000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:53:52.553517   21269 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:53:52.548603   21269 notify.go:220] Checking for updates...
	I0520 04:53:52.559486   21269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:53:52.562547   21269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:53:52.565478   21269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:53:52.568536   21269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:53:52.571549   21269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:53:52.574785   21269 config.go:182] Loaded profile config "force-systemd-flag-223000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:52.574860   21269 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:52.574901   21269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:53:52.579465   21269 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:53:52.585401   21269 start.go:297] selected driver: qemu2
	I0520 04:53:52.585408   21269 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:53:52.585414   21269 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:53:52.587739   21269 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:53:52.590455   21269 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:53:52.593576   21269 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0520 04:53:52.593592   21269 cni.go:84] Creating CNI manager for ""
	I0520 04:53:52.593600   21269 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:53:52.593604   21269 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:53:52.593633   21269 start.go:340] cluster config:
	{Name:docker-flags-422000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:53:52.598159   21269 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:52.605545   21269 out.go:177] * Starting "docker-flags-422000" primary control-plane node in "docker-flags-422000" cluster
	I0520 04:53:52.609508   21269 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:53:52.609521   21269 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:53:52.609533   21269 cache.go:56] Caching tarball of preloaded images
	I0520 04:53:52.609588   21269 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:53:52.609594   21269 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:53:52.609645   21269 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/docker-flags-422000/config.json ...
	I0520 04:53:52.609657   21269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/docker-flags-422000/config.json: {Name:mkc26e75e06f5d161794805f36df5838354ea4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:53:52.609881   21269 start.go:360] acquireMachinesLock for docker-flags-422000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:52.609919   21269 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "docker-flags-422000"
	I0520 04:53:52.609949   21269 start.go:93] Provisioning new machine with config: &{Name:docker-flags-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:52.609983   21269 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:52.617521   21269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:52.636090   21269 start.go:159] libmachine.API.Create for "docker-flags-422000" (driver="qemu2")
	I0520 04:53:52.636118   21269 client.go:168] LocalClient.Create starting
	I0520 04:53:52.636182   21269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:52.636218   21269 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:52.636233   21269 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:52.636272   21269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:52.636296   21269 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:52.636306   21269 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:52.636659   21269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:52.765706   21269 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:52.819557   21269 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:52.819563   21269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:52.819744   21269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2
	I0520 04:53:52.832045   21269 main.go:141] libmachine: STDOUT: 
	I0520 04:53:52.832069   21269 main.go:141] libmachine: STDERR: 
	I0520 04:53:52.832122   21269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2 +20000M
	I0520 04:53:52.842765   21269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:52.842781   21269 main.go:141] libmachine: STDERR: 
	I0520 04:53:52.842801   21269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2
	I0520 04:53:52.842808   21269 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:52.842852   21269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:48:71:79:62:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2
	I0520 04:53:52.844491   21269 main.go:141] libmachine: STDOUT: 
	I0520 04:53:52.844508   21269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:52.844526   21269 client.go:171] duration metric: took 208.403625ms to LocalClient.Create
	I0520 04:53:54.846712   21269 start.go:128] duration metric: took 2.236728792s to createHost
	I0520 04:53:54.846813   21269 start.go:83] releasing machines lock for "docker-flags-422000", held for 2.236865791s
	W0520 04:53:54.846879   21269 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:54.862852   21269 out.go:177] * Deleting "docker-flags-422000" in qemu2 ...
	W0520 04:53:54.883091   21269 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:54.883116   21269 start.go:728] Will try again in 5 seconds ...
	I0520 04:53:59.885320   21269 start.go:360] acquireMachinesLock for docker-flags-422000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:54:00.015013   21269 start.go:364] duration metric: took 129.514584ms to acquireMachinesLock for "docker-flags-422000"
	I0520 04:54:00.015150   21269 start.go:93] Provisioning new machine with config: &{Name:docker-flags-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:54:00.015404   21269 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:54:00.028016   21269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:54:00.076987   21269 start.go:159] libmachine.API.Create for "docker-flags-422000" (driver="qemu2")
	I0520 04:54:00.077038   21269 client.go:168] LocalClient.Create starting
	I0520 04:54:00.077190   21269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:54:00.077249   21269 main.go:141] libmachine: Decoding PEM data...
	I0520 04:54:00.077264   21269 main.go:141] libmachine: Parsing certificate...
	I0520 04:54:00.077328   21269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:54:00.077376   21269 main.go:141] libmachine: Decoding PEM data...
	I0520 04:54:00.077390   21269 main.go:141] libmachine: Parsing certificate...
	I0520 04:54:00.077897   21269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:54:00.235992   21269 main.go:141] libmachine: Creating SSH key...
	I0520 04:54:00.284348   21269 main.go:141] libmachine: Creating Disk image...
	I0520 04:54:00.284353   21269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:54:00.284512   21269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2
	I0520 04:54:00.297108   21269 main.go:141] libmachine: STDOUT: 
	I0520 04:54:00.297129   21269 main.go:141] libmachine: STDERR: 
	I0520 04:54:00.297182   21269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2 +20000M
	I0520 04:54:00.308129   21269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:54:00.308146   21269 main.go:141] libmachine: STDERR: 
	I0520 04:54:00.308157   21269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2
	I0520 04:54:00.308160   21269 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:54:00.308212   21269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:17:7b:2f:e0:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/docker-flags-422000/disk.qcow2
	I0520 04:54:00.309860   21269 main.go:141] libmachine: STDOUT: 
	I0520 04:54:00.309875   21269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:54:00.309888   21269 client.go:171] duration metric: took 232.845375ms to LocalClient.Create
	I0520 04:54:02.312043   21269 start.go:128] duration metric: took 2.296601459s to createHost
	I0520 04:54:02.312110   21269 start.go:83] releasing machines lock for "docker-flags-422000", held for 2.297060125s
	W0520 04:54:02.312534   21269 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-422000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-422000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:54:02.326163   21269 out.go:177] 
	W0520 04:54:02.329182   21269 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:54:02.329204   21269 out.go:239] * 
	* 
	W0520 04:54:02.331701   21269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:54:02.342099   21269 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-422000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-422000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-422000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.540167ms)

-- stdout --
	* The control-plane node docker-flags-422000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-422000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-422000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-422000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-422000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-422000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-422000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-422000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-422000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.701625ms)

-- stdout --
	* The control-plane node docker-flags-422000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-422000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-422000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-422000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-422000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-422000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-05-20 04:54:02.47994 -0700 PDT m=+723.387173501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-422000 -n docker-flags-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-422000 -n docker-flags-422000: exit status 7 (28.000917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-422000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-422000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-422000
--- FAIL: TestDockerFlags (10.11s)
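Every qemu2 start in this test (and in the other failures below) dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never launched, both createHost attempts fail, and the profile is left Stopped. A minimal triage sketch for the CI host follows; the socket and client paths come from the log above, while the pgrep pattern and the Homebrew service name are assumptions about how socket_vmnet was installed, not something this log confirms.

    # Is the socket present, and is the daemon alive? (paths taken from the log above)
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If socket_vmnet was installed via Homebrew (assumed), restarting the
    # service normally recreates the socket:
    sudo brew services restart socket_vmnet

For reference, the assertions this test never reached (docker_test.go:63 and docker_test.go:73) check that the --docker-env and --docker-opt values from the start command surface in the docker systemd unit. On a healthy cluster the sketch below would apply; the expected substrings come from the flags in the failing command above.

    # Environment= is expected to include FOO=BAR and BAZ=BAT:
    out/minikube-darwin-arm64 -p docker-flags-422000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # ExecStart= is expected to include --debug (from --docker-opt=debug):
    out/minikube-darwin-arm64 -p docker-flags-422000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"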

TestForceSystemdFlag (10.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-223000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-223000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.088006375s)

-- stdout --
	* [force-systemd-flag-223000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-223000" primary control-plane node in "force-systemd-flag-223000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-223000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:53:47.134121   21243 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:53:47.134249   21243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:47.134253   21243 out.go:304] Setting ErrFile to fd 2...
	I0520 04:53:47.134255   21243 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:47.134384   21243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:53:47.135449   21243 out.go:298] Setting JSON to false
	I0520 04:53:47.151519   21243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10398,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:53:47.151580   21243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:53:47.157283   21243 out.go:177] * [force-systemd-flag-223000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:53:47.163287   21243 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:53:47.168211   21243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:53:47.163315   21243 notify.go:220] Checking for updates...
	I0520 04:53:47.172643   21243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:53:47.176241   21243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:53:47.179247   21243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:53:47.182243   21243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:53:47.185878   21243 config.go:182] Loaded profile config "force-systemd-env-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:47.185948   21243 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:47.186003   21243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:53:47.190215   21243 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:53:47.197137   21243 start.go:297] selected driver: qemu2
	I0520 04:53:47.197143   21243 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:53:47.197148   21243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:53:47.199345   21243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:53:47.202230   21243 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:53:47.205327   21243 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:53:47.205348   21243 cni.go:84] Creating CNI manager for ""
	I0520 04:53:47.205355   21243 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:53:47.205360   21243 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:53:47.205408   21243 start.go:340] cluster config:
	{Name:force-systemd-flag-223000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:53:47.209781   21243 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:47.217239   21243 out.go:177] * Starting "force-systemd-flag-223000" primary control-plane node in "force-systemd-flag-223000" cluster
	I0520 04:53:47.220182   21243 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:53:47.220201   21243 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:53:47.220212   21243 cache.go:56] Caching tarball of preloaded images
	I0520 04:53:47.220270   21243 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:53:47.220275   21243 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:53:47.220329   21243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/force-systemd-flag-223000/config.json ...
	I0520 04:53:47.220340   21243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/force-systemd-flag-223000/config.json: {Name:mkdb2426ee2c5ac0953b03289a36eb0b6a69c136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:53:47.220549   21243 start.go:360] acquireMachinesLock for force-systemd-flag-223000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:47.220583   21243 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "force-systemd-flag-223000"
	I0520 04:53:47.220597   21243 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:47.220621   21243 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:47.225250   21243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:47.242204   21243 start.go:159] libmachine.API.Create for "force-systemd-flag-223000" (driver="qemu2")
	I0520 04:53:47.242243   21243 client.go:168] LocalClient.Create starting
	I0520 04:53:47.242299   21243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:47.242329   21243 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:47.242338   21243 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:47.242374   21243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:47.242396   21243 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:47.242406   21243 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:47.242770   21243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:47.370381   21243 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:47.443973   21243 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:47.443982   21243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:47.444138   21243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2
	I0520 04:53:47.456716   21243 main.go:141] libmachine: STDOUT: 
	I0520 04:53:47.456737   21243 main.go:141] libmachine: STDERR: 
	I0520 04:53:47.456790   21243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2 +20000M
	I0520 04:53:47.467648   21243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:47.467672   21243 main.go:141] libmachine: STDERR: 
	I0520 04:53:47.467685   21243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2
	I0520 04:53:47.467690   21243 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:47.467721   21243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:ad:b9:c4:62:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2
	I0520 04:53:47.469514   21243 main.go:141] libmachine: STDOUT: 
	I0520 04:53:47.469529   21243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:47.469551   21243 client.go:171] duration metric: took 227.305ms to LocalClient.Create
	I0520 04:53:49.471765   21243 start.go:128] duration metric: took 2.251123208s to createHost
	I0520 04:53:49.471828   21243 start.go:83] releasing machines lock for "force-systemd-flag-223000", held for 2.251251458s
	W0520 04:53:49.471890   21243 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:49.484108   21243 out.go:177] * Deleting "force-systemd-flag-223000" in qemu2 ...
	W0520 04:53:49.509248   21243 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:49.509274   21243 start.go:728] Will try again in 5 seconds ...
	I0520 04:53:54.511534   21243 start.go:360] acquireMachinesLock for force-systemd-flag-223000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:54.846929   21243 start.go:364] duration metric: took 335.284583ms to acquireMachinesLock for "force-systemd-flag-223000"
	I0520 04:53:54.847095   21243 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-223000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-223000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:54.847315   21243 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:54.855071   21243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:54.903061   21243 start.go:159] libmachine.API.Create for "force-systemd-flag-223000" (driver="qemu2")
	I0520 04:53:54.903118   21243 client.go:168] LocalClient.Create starting
	I0520 04:53:54.903275   21243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:54.903350   21243 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:54.903368   21243 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:54.903424   21243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:54.903467   21243 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:54.903479   21243 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:54.904084   21243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:55.050473   21243 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:55.115235   21243 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:55.115240   21243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:55.115440   21243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2
	I0520 04:53:55.127930   21243 main.go:141] libmachine: STDOUT: 
	I0520 04:53:55.127953   21243 main.go:141] libmachine: STDERR: 
	I0520 04:53:55.128019   21243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2 +20000M
	I0520 04:53:55.146576   21243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:55.146606   21243 main.go:141] libmachine: STDERR: 
	I0520 04:53:55.146618   21243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2
	I0520 04:53:55.146623   21243 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:55.146665   21243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:fe:0f:4c:c0:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-flag-223000/disk.qcow2
	I0520 04:53:55.148406   21243 main.go:141] libmachine: STDOUT: 
	I0520 04:53:55.148419   21243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:55.148432   21243 client.go:171] duration metric: took 245.309208ms to LocalClient.Create
	I0520 04:53:57.150675   21243 start.go:128] duration metric: took 2.303321791s to createHost
	I0520 04:53:57.150755   21243 start.go:83] releasing machines lock for "force-systemd-flag-223000", held for 2.303782792s
	W0520 04:53:57.151112   21243 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-223000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-223000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:57.165721   21243 out.go:177] 
	W0520 04:53:57.169730   21243 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:53:57.169783   21243 out.go:239] * 
	* 
	W0520 04:53:57.172462   21243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:53:57.181711   21243 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-223000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-223000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-223000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.392083ms)

-- stdout --
	* The control-plane node force-systemd-flag-223000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-223000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-223000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-20 04:53:57.275639 -0700 PDT m=+718.182834501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-223000 -n force-systemd-flag-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-223000 -n force-systemd-flag-223000: exit status 7 (33.254208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-223000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-223000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-223000
--- FAIL: TestForceSystemdFlag (10.30s)
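TestForceSystemdFlag fails for the same socket_vmnet reason before its real assertion ever runs. Had the VM started, docker_test.go:110 inspects the Docker cgroup driver inside the node; with --force-systemd the expected value is presumably "systemd" rather than the default "cgroupfs". A sketch of the manual equivalent, using the profile name from the failing run:

    out/minikube-darwin-arm64 -p force-systemd-flag-223000 ssh "docker info --format {{.CgroupDriver}}"
    # expected on a passing --force-systemd run: systemd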

TestForceSystemdEnv (10.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-420000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-420000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.869068959s)

-- stdout --
	* [force-systemd-env-420000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-420000" primary control-plane node in "force-systemd-env-420000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-420000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:53:42.440506   21223 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:53:42.440640   21223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:42.440643   21223 out.go:304] Setting ErrFile to fd 2...
	I0520 04:53:42.440646   21223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:42.440805   21223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:53:42.441887   21223 out.go:298] Setting JSON to false
	I0520 04:53:42.458502   21223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10393,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:53:42.458565   21223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:53:42.464711   21223 out.go:177] * [force-systemd-env-420000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:53:42.469653   21223 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:53:42.473667   21223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:53:42.469705   21223 notify.go:220] Checking for updates...
	I0520 04:53:42.479611   21223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:53:42.482654   21223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:53:42.484015   21223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:53:42.486646   21223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0520 04:53:42.489968   21223 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:42.490020   21223 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:53:42.494505   21223 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:53:42.501643   21223 start.go:297] selected driver: qemu2
	I0520 04:53:42.501649   21223 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:53:42.501655   21223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:53:42.504021   21223 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:53:42.506650   21223 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:53:42.509756   21223 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:53:42.509767   21223 cni.go:84] Creating CNI manager for ""
	I0520 04:53:42.509774   21223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:53:42.509776   21223 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:53:42.509802   21223 start.go:340] cluster config:
	{Name:force-systemd-env-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:53:42.514497   21223 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:42.520551   21223 out.go:177] * Starting "force-systemd-env-420000" primary control-plane node in "force-systemd-env-420000" cluster
	I0520 04:53:42.524600   21223 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:53:42.524612   21223 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:53:42.524617   21223 cache.go:56] Caching tarball of preloaded images
	I0520 04:53:42.524666   21223 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:53:42.524670   21223 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:53:42.524717   21223 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/force-systemd-env-420000/config.json ...
	I0520 04:53:42.524726   21223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/force-systemd-env-420000/config.json: {Name:mk10c21a99ed487eaa66f74a29800ee479751e26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:53:42.524922   21223 start.go:360] acquireMachinesLock for force-systemd-env-420000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:42.524952   21223 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "force-systemd-env-420000"
	I0520 04:53:42.524964   21223 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:42.524990   21223 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:42.532622   21223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:42.547581   21223 start.go:159] libmachine.API.Create for "force-systemd-env-420000" (driver="qemu2")
	I0520 04:53:42.547613   21223 client.go:168] LocalClient.Create starting
	I0520 04:53:42.547672   21223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:42.547707   21223 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:42.547717   21223 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:42.547761   21223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:42.547782   21223 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:42.547793   21223 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:42.548195   21223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:42.673162   21223 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:42.833856   21223 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:42.833869   21223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:42.834111   21223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2
	I0520 04:53:42.847110   21223 main.go:141] libmachine: STDOUT: 
	I0520 04:53:42.847132   21223 main.go:141] libmachine: STDERR: 
	I0520 04:53:42.847191   21223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2 +20000M
	I0520 04:53:42.858469   21223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:42.858492   21223 main.go:141] libmachine: STDERR: 
	I0520 04:53:42.858515   21223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2
	I0520 04:53:42.858520   21223 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:42.858553   21223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:9f:46:7f:67:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2
	I0520 04:53:42.860330   21223 main.go:141] libmachine: STDOUT: 
	I0520 04:53:42.860348   21223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:42.860369   21223 client.go:171] duration metric: took 312.751959ms to LocalClient.Create
	I0520 04:53:44.862667   21223 start.go:128] duration metric: took 2.337562209s to createHost
	I0520 04:53:44.862756   21223 start.go:83] releasing machines lock for "force-systemd-env-420000", held for 2.337810958s
	W0520 04:53:44.862814   21223 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:44.874067   21223 out.go:177] * Deleting "force-systemd-env-420000" in qemu2 ...
	W0520 04:53:44.900388   21223 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:44.900419   21223 start.go:728] Will try again in 5 seconds ...
	I0520 04:53:49.902543   21223 start.go:360] acquireMachinesLock for force-systemd-env-420000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:49.903028   21223 start.go:364] duration metric: took 365.292µs to acquireMachinesLock for "force-systemd-env-420000"
	I0520 04:53:49.903171   21223 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:49.903409   21223 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:49.912875   21223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:53:49.962592   21223 start.go:159] libmachine.API.Create for "force-systemd-env-420000" (driver="qemu2")
	I0520 04:53:49.962643   21223 client.go:168] LocalClient.Create starting
	I0520 04:53:49.962763   21223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:49.962826   21223 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:49.962845   21223 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:49.962917   21223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:49.962960   21223 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:49.963030   21223 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:49.964100   21223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:50.111220   21223 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:50.212814   21223 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:50.212823   21223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:50.213105   21223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2
	I0520 04:53:50.225804   21223 main.go:141] libmachine: STDOUT: 
	I0520 04:53:50.225827   21223 main.go:141] libmachine: STDERR: 
	I0520 04:53:50.225885   21223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2 +20000M
	I0520 04:53:50.236760   21223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:50.236781   21223 main.go:141] libmachine: STDERR: 
	I0520 04:53:50.236797   21223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2
	I0520 04:53:50.236802   21223 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:50.236846   21223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:93:d4:2c:42:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/force-systemd-env-420000/disk.qcow2
	I0520 04:53:50.238575   21223 main.go:141] libmachine: STDOUT: 
	I0520 04:53:50.238595   21223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:50.238608   21223 client.go:171] duration metric: took 275.959958ms to LocalClient.Create
	I0520 04:53:52.240795   21223 start.go:128] duration metric: took 2.337338958s to createHost
	I0520 04:53:52.240877   21223 start.go:83] releasing machines lock for "force-systemd-env-420000", held for 2.337839625s
	W0520 04:53:52.241180   21223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:52.250716   21223 out.go:177] 
	W0520 04:53:52.254896   21223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:53:52.254925   21223 out.go:239] * 
	* 
	W0520 04:53:52.257389   21223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:53:52.266745   21223 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-420000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-420000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-420000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.114167ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-420000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-420000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-420000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-20 04:53:52.363962 -0700 PDT m=+713.271122834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-420000 -n force-systemd-env-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-420000 -n force-systemd-env-420000: exit status 7 (34.811375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-420000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-420000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-420000
--- FAIL: TestForceSystemdEnv (10.08s)
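
Every failure in this section traces back to the same root cause: the socket_vmnet client cannot dial the daemon's unix socket at /var/run/socket_vmnet, so "Connection refused" means nothing is listening there. A minimal diagnostic sketch, assuming the /opt/socket_vmnet layout shown in the logs; the daemon launch command and gateway address below are upstream socket_vmnet defaults assumed here, not taken from this report:

    # Is anything listening on the socket the qemu2 driver dials?
    ls -l /var/run/socket_vmnet                         # a stale socket file may remain
    nc -U /var/run/socket_vmnet </dev/null && echo "daemon reachable"

    # If not, (re)start the daemon; vmnet requires root. The gateway
    # address is the upstream default and is an assumption here.
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

With the daemon reachable again, the same start invocations should get past host creation instead of looping through the delete-and-retry path shown above.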

                                                
                                    
TestErrorSpam/setup (9.8s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-804000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-804000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 --driver=qemu2 : exit status 80 (9.802387875s)

                                                
                                                
-- stdout --
	* [nospam-804000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-804000" primary control-plane node in "nospam-804000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-804000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-804000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-804000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-804000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18929
- KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-804000" primary control-plane node in "nospam-804000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-804000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.80s)
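
error_spam_test treats essentially any stderr output from a clean start as spam (a small allow-list aside) and separately requires the three kubeadm init sub-steps in stdout. A rough shell approximation of those two assertions, not the test's own code, with illustrative log file names:

    out/minikube-darwin-arm64 start -p nospam-804000 --driver=qemu2 \
        >stdout.log 2>stderr.log
    test -s stderr.log && echo "unexpected stderr (spam):" && cat stderr.log
    for step in "Generating certificates and keys" \
                "Booting up control plane" "Configuring RBAC rules"; do
        grep -q "$step" stdout.log || echo "missing kubeadm sub-step: $step"
    done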

                                                
                                    
TestFunctional/serial/StartWithProxy (10.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-832000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-832000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.093883334s)

                                                
                                                
-- stdout --
	* [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-832000" primary control-plane node in "functional-832000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-832000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53753 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53753 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53753 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-832000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18929
- KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-832000" primary control-plane node in "functional-832000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-832000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:53753 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:53753 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:53753 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (74.728042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.17s)
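
StartWithProxy runs the start under a throwaway local proxy (here HTTP_PROXY=localhost:53753, visible in the stderr above) and asserts on two marker strings: "Found network options:" in stdout and "You appear to be using a proxy" in stderr. Neither can appear because the start aborts during host creation. Reproducing the setup by hand looks roughly like this; the port is whatever the test's ephemeral proxy bound to:

    HTTP_PROXY=localhost:53753 out/minikube-darwin-arm64 start \
        -p functional-832000 --memory=4000 --apiserver-port=8441 \
        --wait=all --driver=qemu2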

                                                
                                    
TestFunctional/serial/SoftStart (5.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-832000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-832000 --alsologtostderr -v=8: exit status 80 (5.178441625s)

                                                
                                                
-- stdout --
	* [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-832000" primary control-plane node in "functional-832000" cluster
	* Restarting existing qemu2 VM for "functional-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:42:57.940752   19770 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:42:57.940951   19770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:42:57.940954   19770 out.go:304] Setting ErrFile to fd 2...
	I0520 04:42:57.940956   19770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:42:57.941088   19770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:42:57.942085   19770 out.go:298] Setting JSON to false
	I0520 04:42:57.958029   19770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9748,"bootTime":1716195629,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:42:57.958091   19770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:42:57.962728   19770 out.go:177] * [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:42:57.969756   19770 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:42:57.969808   19770 notify.go:220] Checking for updates...
	I0520 04:42:57.973656   19770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:42:57.976677   19770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:42:57.979669   19770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:42:57.982720   19770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:42:57.985684   19770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:42:57.988978   19770 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:42:57.989035   19770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:42:57.993601   19770 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:42:58.000684   19770 start.go:297] selected driver: qemu2
	I0520 04:42:58.000693   19770 start.go:901] validating driver "qemu2" against &{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:42:58.000766   19770 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:42:58.002962   19770 cni.go:84] Creating CNI manager for ""
	I0520 04:42:58.002981   19770 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:42:58.003028   19770 start.go:340] cluster config:
	{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:42:58.007440   19770 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:42:58.015678   19770 out.go:177] * Starting "functional-832000" primary control-plane node in "functional-832000" cluster
	I0520 04:42:58.018636   19770 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:42:58.018649   19770 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:42:58.018660   19770 cache.go:56] Caching tarball of preloaded images
	I0520 04:42:58.018708   19770 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:42:58.018714   19770 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:42:58.018760   19770 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/functional-832000/config.json ...
	I0520 04:42:58.019144   19770 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:42:58.019178   19770 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "functional-832000"
	I0520 04:42:58.019188   19770 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:42:58.019194   19770 fix.go:54] fixHost starting: 
	I0520 04:42:58.019308   19770 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
	W0520 04:42:58.019316   19770 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:42:58.027478   19770 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
	I0520 04:42:58.031646   19770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
	I0520 04:42:58.033740   19770 main.go:141] libmachine: STDOUT: 
	I0520 04:42:58.033759   19770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:42:58.033786   19770 fix.go:56] duration metric: took 14.592625ms for fixHost
	I0520 04:42:58.033790   19770 start.go:83] releasing machines lock for "functional-832000", held for 14.608041ms
	W0520 04:42:58.033796   19770 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:42:58.033820   19770 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:42:58.033825   19770 start.go:728] Will try again in 5 seconds ...
	I0520 04:43:03.035959   19770 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:43:03.036332   19770 start.go:364] duration metric: took 275.916µs to acquireMachinesLock for "functional-832000"
	I0520 04:43:03.036465   19770 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:43:03.036485   19770 fix.go:54] fixHost starting: 
	I0520 04:43:03.037210   19770 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
	W0520 04:43:03.037236   19770 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:43:03.040726   19770 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
	I0520 04:43:03.044872   19770 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
	I0520 04:43:03.053804   19770 main.go:141] libmachine: STDOUT: 
	I0520 04:43:03.053868   19770 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:43:03.053932   19770 fix.go:56] duration metric: took 17.448625ms for fixHost
	I0520 04:43:03.053948   19770 start.go:83] releasing machines lock for "functional-832000", held for 17.597625ms
	W0520 04:43:03.054182   19770 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:43:03.061718   19770 out.go:177] 
	W0520 04:43:03.064734   19770 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:43:03.064757   19770 out.go:239] * 
	* 
	W0520 04:43:03.067453   19770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:43:03.075644   19770 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-832000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.180299709s for "functional-832000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (67.124208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
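
Because the functional-832000 profile already exists, SoftStart takes the fixHost path ("Restarting existing qemu2 VM") instead of creating a new machine; the restart dials the same dead socket and fails identically. Once socket_vmnet is reachable, retrying by hand would look like:

    out/minikube-darwin-arm64 start -p functional-832000 --alsologtostderr -v=8
    out/minikube-darwin-arm64 status -p functional-832000   # expect Running, not exit status 7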

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.386833ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-832000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (29.471333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
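
The context assertion fails simply because the aborted start never wrote a functional-832000 entry into the kubeconfig at /Users/jenkins/minikube-integration/18929-19024/kubeconfig. Checking by hand:

    kubectl config current-context    # errors: current-context is not set
    kubectl config get-contexts      # functional-832000 will be absent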

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-832000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-832000 get po -A: exit status 1 (25.976833ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-832000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-832000\n"*: args "kubectl --context functional-832000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-832000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (29.467958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl images: exit status 83 (41.819583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

                                                
                                                
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
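
On a healthy node this check lists container images over SSH and expects the pause:3.3 image ID prefix 3d18732f8686c among them; with the host stopped, every ssh subcommand short-circuits with exit status 83 instead. The intended check, runnable once the node is up:

    out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl images \
        | grep 3d18732f8686c    # pause:3.3 image ID prefix from the test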

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.981833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

                                                
                                                
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-832000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.948959ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

                                                
                                                
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.895958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

                                                
                                                
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
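
cache_reload is a round trip: delete the pause image on the node, confirm it is gone, run `cache reload`, and verify the image is back. Here only the local `cache reload` step succeeds; both ssh steps exit 83 because the host is stopped. The sequence, as the test runs it, against a running cluster:

    out/minikube-darwin-arm64 -p functional-832000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-832000 cache reload
    out/minikube-darwin-arm64 -p functional-832000 ssh sudo crictl inspecti registry.k8s.io/pause:latest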

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 kubectl -- --context functional-832000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 kubectl -- --context functional-832000 get pods: exit status 1 (608.322709ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-832000
	* no server found for cluster "functional-832000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-832000 kubectl -- --context functional-832000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (31.070041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-832000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-832000 get pods: exit status 1 (918.731166ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-832000
	* no server found for cluster "functional-832000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-832000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (28.898042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.95s)
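The post-mortem helper that closes each failed test probes the profile with "status --format={{.Host}}". That format string is a Go text/template evaluated against minikube's status data, which is why a bare "Stopped" comes back on stdout; the helper itself notes that the accompanying exit status 7 "may be ok" for a stopped machine. A minimal sketch of the templating mechanism follows; the Status struct here is an illustrative stand-in, not minikube's real type.

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative stand-in for the per-profile status fields; minikube's
    // actual status type carries more than these two.
    type Status struct {
        Host    string
        Kubelet string
    }

    func main() {
        // The same template string the post-mortem passes via --format.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"}) // prints: Stopped
    }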

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-832000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-832000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.178251083s)

-- stdout --
	* [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-832000" primary control-plane node in "functional-832000" cluster
	* Restarting existing qemu2 VM for "functional-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-832000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.178822334s for "functional-832000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (68.639209ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
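This trace exposes the root cause behind the whole run of failures in this section: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, the connection to "/var/run/socket_vmnet" is refused (the socket_vmnet daemon is evidently not serving on this CI host), minikube waits five seconds, retries the host start once, and then exits with status 80 (GUEST_PROVISION). The retry shape visible in the log reduces to roughly the following Go sketch, where startHost is a hypothetical stand-in for the driver call:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // Hypothetical stand-in for the driver start that fails above with a
    // connection-refused error on /var/run/socket_vmnet.
    func startHost() error {
        return errors.New(`connect /var/run/socket_vmnet: connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // the exit status recorded for this test
            }
        }
        fmt.Println("host started")
    }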

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-832000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-832000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.758ms)

** stderr ** 
	error: context "functional-832000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-832000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (29.226375ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
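ComponentHealth never reaches its real assertion: the kubectl query for control-plane pods ("get po -l tier=control-plane -n kube-system -o=json") dies on the missing context before any JSON arrives. Had it succeeded, the output would be a standard PodList document; as a rough illustration of what the test then has to parse, here is a hedged sketch that decodes just the fields a health check needs (the struct shape assumes the ordinary kubectl -o=json PodList layout and is not the test's own code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Minimal view of a PodList as emitted by "kubectl get po -o=json";
    // only the fields a simple health check needs are declared.
    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase string `json:"phase"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-832000",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err) // here: the context does not exist
            return
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            fmt.Println("unexpected JSON:", err)
            return
        }
        for _, p := range pods.Items {
            fmt.Printf("%s: %s\n", p.Metadata.Name, p.Status.Phase)
        }
    }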

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 logs: exit status 83 (74.685459ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:41 PDT |                     |
	|         | -p download-only-533000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| delete  | -p download-only-533000                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| start   | -o=json --download-only                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | -p download-only-341000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| delete  | -p download-only-341000                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| delete  | -p download-only-533000                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| delete  | -p download-only-341000                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| start   | --download-only -p                                                       | binary-mirror-746000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | binary-mirror-746000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53721                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-746000                                                  | binary-mirror-746000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| addons  | enable dashboard -p                                                      | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | addons-892000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | addons-892000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-892000 --wait=true                                             | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-892000                                                         | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| start   | -p nospam-804000 -n=1 --memory=2250 --wait=false                         | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-804000                                                         | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | minikube-local-cache-test:functional-832000                              |                      |         |         |                     |                     |
	| cache   | functional-832000 cache delete                                           | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | minikube-local-cache-test:functional-832000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	| ssh     | functional-832000 ssh sudo                                               | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-832000                                                        | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-832000 ssh                                                    | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-832000 cache reload                                           | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	| ssh     | functional-832000 ssh                                                    | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-832000 kubectl --                                             | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
	|         | --context functional-832000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:43:08
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:43:08.307248   19848 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:43:08.307384   19848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:43:08.307385   19848 out.go:304] Setting ErrFile to fd 2...
	I0520 04:43:08.307387   19848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:43:08.307488   19848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:43:08.308470   19848 out.go:298] Setting JSON to false
	I0520 04:43:08.324354   19848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9759,"bootTime":1716195629,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:43:08.324468   19848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:43:08.328920   19848 out.go:177] * [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:43:08.337621   19848 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:43:08.342605   19848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:43:08.337678   19848 notify.go:220] Checking for updates...
	I0520 04:43:08.346548   19848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:43:08.349572   19848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:43:08.352547   19848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:43:08.355613   19848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:43:08.357184   19848 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:43:08.357242   19848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:43:08.361544   19848 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:43:08.368352   19848 start.go:297] selected driver: qemu2
	I0520 04:43:08.368356   19848 start.go:901] validating driver "qemu2" against &{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:43:08.368405   19848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:43:08.370672   19848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:43:08.370695   19848 cni.go:84] Creating CNI manager for ""
	I0520 04:43:08.370701   19848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:43:08.370748   19848 start.go:340] cluster config:
	{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:43:08.374993   19848 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:43:08.383542   19848 out.go:177] * Starting "functional-832000" primary control-plane node in "functional-832000" cluster
	I0520 04:43:08.387531   19848 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:43:08.387544   19848 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:43:08.387553   19848 cache.go:56] Caching tarball of preloaded images
	I0520 04:43:08.387605   19848 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:43:08.387614   19848 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:43:08.387687   19848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/functional-832000/config.json ...
	I0520 04:43:08.388094   19848 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:43:08.388128   19848 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "functional-832000"
	I0520 04:43:08.388136   19848 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:43:08.388142   19848 fix.go:54] fixHost starting: 
	I0520 04:43:08.388254   19848 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
	W0520 04:43:08.388261   19848 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:43:08.391675   19848 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
	I0520 04:43:08.399599   19848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
	I0520 04:43:08.401658   19848 main.go:141] libmachine: STDOUT: 
	I0520 04:43:08.401676   19848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:43:08.401702   19848 fix.go:56] duration metric: took 13.561333ms for fixHost
	I0520 04:43:08.401704   19848 start.go:83] releasing machines lock for "functional-832000", held for 13.574167ms
	W0520 04:43:08.401709   19848 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:43:08.401736   19848 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:43:08.401741   19848 start.go:728] Will try again in 5 seconds ...
	I0520 04:43:13.403825   19848 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:43:13.404161   19848 start.go:364] duration metric: took 292.917µs to acquireMachinesLock for "functional-832000"
	I0520 04:43:13.404362   19848 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:43:13.404376   19848 fix.go:54] fixHost starting: 
	I0520 04:43:13.405143   19848 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
	W0520 04:43:13.405162   19848 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:43:13.413543   19848 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
	I0520 04:43:13.417621   19848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
	I0520 04:43:13.426325   19848 main.go:141] libmachine: STDOUT: 
	I0520 04:43:13.426379   19848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:43:13.426440   19848 fix.go:56] duration metric: took 22.069459ms for fixHost
	I0520 04:43:13.426451   19848 start.go:83] releasing machines lock for "functional-832000", held for 22.210291ms
	W0520 04:43:13.426626   19848 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:43:13.433610   19848 out.go:177] 
	W0520 04:43:13.437597   19848 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:43:13.437617   19848 out.go:239] * 
	W0520 04:43:13.440401   19848 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:43:13.445502   19848 out.go:177] 
	
	
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-832000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:41 PDT |                     |
|         | -p download-only-533000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-533000                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | -o=json --download-only                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | -p download-only-341000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-341000                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-533000                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-341000                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | --download-only -p                                                       | binary-mirror-746000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | binary-mirror-746000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53721                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-746000                                                  | binary-mirror-746000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| addons  | enable dashboard -p                                                      | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | addons-892000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | addons-892000                                                            |                      |         |         |                     |                     |
| start   | -p addons-892000 --wait=true                                             | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-892000                                                         | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | -p nospam-804000 -n=1 --memory=2250 --wait=false                         | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-804000                                                         | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | minikube-local-cache-test:functional-832000                              |                      |         |         |                     |                     |
| cache   | functional-832000 cache delete                                           | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | minikube-local-cache-test:functional-832000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
| ssh     | functional-832000 ssh sudo                                               | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-832000                                                        | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-832000 ssh                                                    | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-832000 cache reload                                           | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
| ssh     | functional-832000 ssh                                                    | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-832000 kubectl --                                             | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | --context functional-832000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/20 04:43:08
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0520 04:43:08.307248   19848 out.go:291] Setting OutFile to fd 1 ...
I0520 04:43:08.307384   19848 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:08.307385   19848 out.go:304] Setting ErrFile to fd 2...
I0520 04:43:08.307387   19848 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:08.307488   19848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:43:08.308470   19848 out.go:298] Setting JSON to false
I0520 04:43:08.324354   19848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9759,"bootTime":1716195629,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0520 04:43:08.324468   19848 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0520 04:43:08.328920   19848 out.go:177] * [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0520 04:43:08.337621   19848 out.go:177]   - MINIKUBE_LOCATION=18929
I0520 04:43:08.342605   19848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
I0520 04:43:08.337678   19848 notify.go:220] Checking for updates...
I0520 04:43:08.346548   19848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0520 04:43:08.349572   19848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0520 04:43:08.352547   19848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
I0520 04:43:08.355613   19848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0520 04:43:08.357184   19848 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:43:08.357242   19848 driver.go:392] Setting default libvirt URI to qemu:///system
I0520 04:43:08.361544   19848 out.go:177] * Using the qemu2 driver based on existing profile
I0520 04:43:08.368352   19848 start.go:297] selected driver: qemu2
I0520 04:43:08.368356   19848 start.go:901] validating driver "qemu2" against &{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:43:08.368405   19848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0520 04:43:08.370672   19848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0520 04:43:08.370695   19848 cni.go:84] Creating CNI manager for ""
I0520 04:43:08.370701   19848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0520 04:43:08.370748   19848 start.go:340] cluster config:
{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:43:08.374993   19848 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0520 04:43:08.383542   19848 out.go:177] * Starting "functional-832000" primary control-plane node in "functional-832000" cluster
I0520 04:43:08.387531   19848 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 04:43:08.387544   19848 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0520 04:43:08.387553   19848 cache.go:56] Caching tarball of preloaded images
I0520 04:43:08.387605   19848 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0520 04:43:08.387614   19848 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 04:43:08.387687   19848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/functional-832000/config.json ...
I0520 04:43:08.388094   19848 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:43:08.388128   19848 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "functional-832000"
I0520 04:43:08.388136   19848 start.go:96] Skipping create...Using existing machine configuration
I0520 04:43:08.388142   19848 fix.go:54] fixHost starting: 
I0520 04:43:08.388254   19848 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
W0520 04:43:08.388261   19848 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:43:08.391675   19848 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
I0520 04:43:08.399599   19848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
I0520 04:43:08.401658   19848 main.go:141] libmachine: STDOUT: 
I0520 04:43:08.401676   19848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 04:43:08.401702   19848 fix.go:56] duration metric: took 13.561333ms for fixHost
I0520 04:43:08.401704   19848 start.go:83] releasing machines lock for "functional-832000", held for 13.574167ms
W0520 04:43:08.401709   19848 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:43:08.401736   19848 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:43:08.401741   19848 start.go:728] Will try again in 5 seconds ...
I0520 04:43:13.403825   19848 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:43:13.404161   19848 start.go:364] duration metric: took 292.917µs to acquireMachinesLock for "functional-832000"
I0520 04:43:13.404362   19848 start.go:96] Skipping create...Using existing machine configuration
I0520 04:43:13.404376   19848 fix.go:54] fixHost starting: 
I0520 04:43:13.405143   19848 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
W0520 04:43:13.405162   19848 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:43:13.413543   19848 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
I0520 04:43:13.417621   19848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
I0520 04:43:13.426325   19848 main.go:141] libmachine: STDOUT: 
I0520 04:43:13.426379   19848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 04:43:13.426440   19848 fix.go:56] duration metric: took 22.069459ms for fixHost
I0520 04:43:13.426451   19848 start.go:83] releasing machines lock for "functional-832000", held for 22.210291ms
W0520 04:43:13.426626   19848 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:43:13.433610   19848 out.go:177] 
W0520 04:43:13.437597   19848 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:43:13.437617   19848 out.go:239] * 
W0520 04:43:13.440401   19848 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:43:13.445502   19848 out.go:177]

* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
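
Every start attempt in this run dies the same way: qemu is launched through socket_vmnet_client, the connection to /var/run/socket_vmnet is refused, the VM never boots, and "minikube logs" therefore has no host to read from. A minimal host-side triage sketch, assuming a Homebrew-managed socket_vmnet on the macOS agent (the service name and restart step below are assumptions based on minikube's qemu driver documentation, not taken from this log):

    # Does the socket exist, i.e. is the daemon holding it open?
    ls -l /var/run/socket_vmnet

    # Restart the daemon if the socket is missing or stale
    # (minikube's qemu driver docs suggest running it via brew services).
    sudo brew services restart socket_vmnet

    # Re-run the failing profile once the socket accepts connections.
    out/minikube-darwin-arm64 delete -p functional-832000
    out/minikube-darwin-arm64 start -p functional-832000 --driver=qemu2 --network=socket_vmnet

If the daemon appears healthy, the same "Connection refused" can also occur when the client binary (/opt/socket_vmnet/bin/socket_vmnet_client) and the daemon were installed with different prefixes and so disagree on the socket path; comparing both paths against the installed formula is a reasonable next step.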

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd204030594/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:41 PDT |                     |
|         | -p download-only-533000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-533000                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | -o=json --download-only                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | -p download-only-341000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-341000                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-533000                                                  | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| delete  | -p download-only-341000                                                  | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | --download-only -p                                                       | binary-mirror-746000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | binary-mirror-746000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53721                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-746000                                                  | binary-mirror-746000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| addons  | enable dashboard -p                                                      | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | addons-892000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | addons-892000                                                            |                      |         |         |                     |                     |
| start   | -p addons-892000 --wait=true                                             | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-892000                                                         | addons-892000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | -p nospam-804000 -n=1 --memory=2250 --wait=false                         | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-804000 --log_dir                                                  | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-804000                                                         | nospam-804000        | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-832000 cache add                                              | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | minikube-local-cache-test:functional-832000                              |                      |         |         |                     |                     |
| cache   | functional-832000 cache delete                                           | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | minikube-local-cache-test:functional-832000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
| ssh     | functional-832000 ssh sudo                                               | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-832000                                                        | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-832000 ssh                                                    | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-832000 cache reload                                           | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
| ssh     | functional-832000 ssh                                                    | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:43 PDT | 20 May 24 04:43 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-832000 kubectl --                                             | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | --context functional-832000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-832000                                                     | functional-832000    | jenkins | v1.33.1 | 20 May 24 04:43 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/20 04:43:08
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0520 04:43:08.307248   19848 out.go:291] Setting OutFile to fd 1 ...
I0520 04:43:08.307384   19848 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:08.307385   19848 out.go:304] Setting ErrFile to fd 2...
I0520 04:43:08.307387   19848 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:08.307488   19848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:43:08.308470   19848 out.go:298] Setting JSON to false
I0520 04:43:08.324354   19848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9759,"bootTime":1716195629,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0520 04:43:08.324468   19848 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0520 04:43:08.328920   19848 out.go:177] * [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0520 04:43:08.337621   19848 out.go:177]   - MINIKUBE_LOCATION=18929
I0520 04:43:08.342605   19848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
I0520 04:43:08.337678   19848 notify.go:220] Checking for updates...
I0520 04:43:08.346548   19848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0520 04:43:08.349572   19848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0520 04:43:08.352547   19848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
I0520 04:43:08.355613   19848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0520 04:43:08.357184   19848 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:43:08.357242   19848 driver.go:392] Setting default libvirt URI to qemu:///system
I0520 04:43:08.361544   19848 out.go:177] * Using the qemu2 driver based on existing profile
I0520 04:43:08.368352   19848 start.go:297] selected driver: qemu2
I0520 04:43:08.368356   19848 start.go:901] validating driver "qemu2" against &{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:43:08.368405   19848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0520 04:43:08.370672   19848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0520 04:43:08.370695   19848 cni.go:84] Creating CNI manager for ""
I0520 04:43:08.370701   19848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0520 04:43:08.370748   19848 start.go:340] cluster config:
{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:43:08.374993   19848 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0520 04:43:08.383542   19848 out.go:177] * Starting "functional-832000" primary control-plane node in "functional-832000" cluster
I0520 04:43:08.387531   19848 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 04:43:08.387544   19848 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0520 04:43:08.387553   19848 cache.go:56] Caching tarball of preloaded images
I0520 04:43:08.387605   19848 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0520 04:43:08.387614   19848 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 04:43:08.387687   19848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/functional-832000/config.json ...
I0520 04:43:08.388094   19848 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:43:08.388128   19848 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "functional-832000"
I0520 04:43:08.388136   19848 start.go:96] Skipping create...Using existing machine configuration
I0520 04:43:08.388142   19848 fix.go:54] fixHost starting: 
I0520 04:43:08.388254   19848 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
W0520 04:43:08.388261   19848 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:43:08.391675   19848 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
I0520 04:43:08.399599   19848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
I0520 04:43:08.401658   19848 main.go:141] libmachine: STDOUT: 
I0520 04:43:08.401676   19848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 04:43:08.401702   19848 fix.go:56] duration metric: took 13.561333ms for fixHost
I0520 04:43:08.401704   19848 start.go:83] releasing machines lock for "functional-832000", held for 13.574167ms
W0520 04:43:08.401709   19848 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:43:08.401736   19848 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:43:08.401741   19848 start.go:728] Will try again in 5 seconds ...
I0520 04:43:13.403825   19848 start.go:360] acquireMachinesLock for functional-832000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:43:13.404161   19848 start.go:364] duration metric: took 292.917µs to acquireMachinesLock for "functional-832000"
I0520 04:43:13.404362   19848 start.go:96] Skipping create...Using existing machine configuration
I0520 04:43:13.404376   19848 fix.go:54] fixHost starting: 
I0520 04:43:13.405143   19848 fix.go:112] recreateIfNeeded on functional-832000: state=Stopped err=<nil>
W0520 04:43:13.405162   19848 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:43:13.413543   19848 out.go:177] * Restarting existing qemu2 VM for "functional-832000" ...
I0520 04:43:13.417621   19848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4f:e8:76:6f:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/functional-832000/disk.qcow2
I0520 04:43:13.426325   19848 main.go:141] libmachine: STDOUT: 
I0520 04:43:13.426379   19848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 04:43:13.426440   19848 fix.go:56] duration metric: took 22.069459ms for fixHost
I0520 04:43:13.426451   19848 start.go:83] releasing machines lock for "functional-832000", held for 22.210291ms
W0520 04:43:13.426626   19848 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:43:13.433610   19848 out.go:177] 
W0520 04:43:13.437597   19848 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:43:13.437617   19848 out.go:239] * 
W0520 04:43:13.440401   19848 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:43:13.445502   19848 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
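Every failure below traces back to the same root cause shown in this log: the qemu2 driver cannot reach the socket_vmnet control socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never restarts and each test that needs the cluster fails immediately. A minimal standalone check, assuming only the socket path printed in the log (this is not part of the test suite), is to attempt the same unix-socket dial the driver performs:

    package main

    import (
        "fmt"
        "net"
    )

    // Dial the socket_vmnet control socket the same way the qemu2 driver does.
    // While the socket_vmnet service is down, this reproduces the
    // "Connection refused" error seen throughout this report.
    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the dial fails, the socket_vmnet service on the Jenkins host needs to be restarted (however it is managed on that machine); "minikube delete -p functional-832000" alone cannot help while the socket is unreachable.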

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-832000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-832000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.283459ms)

** stderr ** 
	error: context "functional-832000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-832000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-832000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-832000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-832000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-832000 --alsologtostderr -v=1] stderr:
I0520 04:43:54.206346   20059 out.go:291] Setting OutFile to fd 1 ...
I0520 04:43:54.206933   20059 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:54.206937   20059 out.go:304] Setting ErrFile to fd 2...
I0520 04:43:54.206940   20059 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:54.207087   20059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:43:54.207302   20059 mustload.go:65] Loading cluster: functional-832000
I0520 04:43:54.207488   20059 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:43:54.211515   20059 out.go:177] * The control-plane node functional-832000 host is not running: state=Stopped
I0520 04:43:54.214508   20059 out.go:177]   To start a cluster, run: "minikube start -p functional-832000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (41.103625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.19s)

TestFunctional/parallel/StatusCmd (0.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 status: exit status 7 (72.092417ms)

-- stdout --
	functional-832000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-832000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.506375ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-832000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 status -o json: exit status 7 (29.224458ms)

-- stdout --
	{"Name":"functional-832000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-832000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (28.978ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.16s)
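The custom -f/--format argument above is a Go text/template executed against minikube's status structure. A self-contained sketch of how that template evaluates, using a stub struct whose field names are taken from the template itself (not minikube's actual type):

    package main

    import (
        "os"
        "text/template"
    )

    // Stub of the status fields referenced by the template. Note the template
    // itself spells "kublet" in the label; that text is literal output, while
    // {{.Kubelet}} is the field lookup.
    type status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        const f = `host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}`
        t := template.Must(template.New("status").Parse(f))
        // Prints: host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
        if err := t.Execute(os.Stdout, status{"Stopped", "Stopped", "Stopped", "Stopped"}); err != nil {
            panic(err)
        }
    }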

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-832000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-832000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.675791ms)

** stderr ** 
	error: context "functional-832000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-832000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-832000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-832000 describe po hello-node-connect: exit status 1 (26.988708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:1600: "kubectl --context functional-832000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-832000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-832000 logs -l app=hello-node-connect: exit status 1 (26.910041ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:1606: "kubectl --context functional-832000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-832000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-832000 describe svc hello-node-connect: exit status 1 (26.611167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:1612: "kubectl --context functional-832000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (29.447125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-832000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (27.84825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "echo hello": exit status 83 (44.715625ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n"*. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "cat /etc/hostname": exit status 83 (37.748583ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-832000"- but got *"* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n"*. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (35.308208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (51.734125ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-832000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 "sudo cat /home/docker/cp-test.txt": exit status 83 (38.886959ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-832000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-832000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cp functional-832000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3387234632/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 cp functional-832000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3387234632/001/cp-test.txt: exit status 83 (40.408541ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-832000 cp functional-832000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3387234632/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.080208ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3387234632/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.693125ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-832000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (56.056042ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-832000 ssh -n functional-832000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-832000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-832000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
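The "(-want +got)" blocks above are go-cmp string diffs: "-" lines come from the expected value and "+" lines from the actual output, with unchanged fragments left unprefixed. A self-contained sketch (illustrative, not the suite's own helper) that produces the same kind of diff for this CpCmd mismatch:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    // cmp.Diff returns "" when the values are equal, otherwise a diff in which
    // "-" lines belong to the first argument and "+" lines to the second.
    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-832000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-832000\"\n"
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("content mismatch (-want +got):\n%s", diff)
        }
    }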

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/19517/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/test/nested/copy/19517/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/test/nested/copy/19517/hosts": exit status 83 (37.987542ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/test/nested/copy/19517/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-832000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-832000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (29.169333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/19517.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/19517.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/19517.pem": exit status 83 (41.48825ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/19517.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo cat /etc/ssl/certs/19517.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/19517.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-832000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-832000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/19517.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /usr/share/ca-certificates/19517.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /usr/share/ca-certificates/19517.pem": exit status 83 (40.806334ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/19517.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo cat /usr/share/ca-certificates/19517.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/19517.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-832000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-832000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.58775ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-832000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-832000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/195172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/195172.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/195172.pem": exit status 83 (37.771917ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/195172.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo cat /etc/ssl/certs/195172.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/195172.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-832000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-832000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/195172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /usr/share/ca-certificates/195172.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /usr/share/ca-certificates/195172.pem": exit status 83 (37.603709ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/195172.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo cat /usr/share/ca-certificates/195172.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/195172.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-832000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-832000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.715625ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-832000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-832000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (28.964083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.27s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-832000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-832000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.861833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-832000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-832000 -n functional-832000: exit status 7 (28.35675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.05s)
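
For reference, the five label assertions above reduce to a single kubectl query. A minimal manual check, assuming the "functional-832000" profile can actually be started (these are standard minikube/kubectl invocations, not part of the test itself):

	minikube start -p functional-832000
	# minikube sets the minikube.k8s.io/* labels when it provisions the node
	kubectl --context functional-832000 get nodes --show-labels | grep minikube.k8s.io

An empty result here would point at provisioning rather than labeling, which matches the "context was not found" errors above.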

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo systemctl is-active crio": exit status 83 (43.248125ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
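
The runtime check is a single systemctl probe over SSH. A manual sketch, assuming the VM boots (on a docker-runtime cluster "crio" should report "inactive" and the command exits non-zero, which the test treats as a pass):

	minikube -p functional-832000 ssh "sudo systemctl is-active crio"

Here the host never ran, so minikube printed its "host is not running" hint instead and the probe itself was never executed.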

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0520 04:43:14.078863   19896 out.go:291] Setting OutFile to fd 1 ...
I0520 04:43:14.079223   19896 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:14.079229   19896 out.go:304] Setting ErrFile to fd 2...
I0520 04:43:14.079232   19896 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:43:14.079381   19896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:43:14.079698   19896 mustload.go:65] Loading cluster: functional-832000
I0520 04:43:14.079921   19896 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:43:14.083794   19896 out.go:177] * The control-plane node functional-832000 host is not running: state=Stopped
I0520 04:43:14.096820   19896 out.go:177]   To start a cluster, run: "minikube start -p functional-832000"

stdout: * The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 19897: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-832000": client config: context "functional-832000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.9s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-832000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-832000 get svc nginx-svc: exit status 1 (72.875417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-832000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-832000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.90s)
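
A manual reproduction of this flow, assuming a running cluster with the nginx-svc service deployed earlier in the suite (<EXTERNAL-IP> is a placeholder to be read from the svc output, not a real value from this run):

	minikube -p functional-832000 tunnel &
	kubectl --context functional-832000 get svc nginx-svc
	curl -s http://<EXTERNAL-IP> | grep "Welcome to nginx!"

The empty URL in the first error ("http:": no Host in request URL) shows the test never obtained an external IP to probe.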

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-832000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-832000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.588333ms)

** stderr ** 
	error: context "functional-832000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-832000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 service list: exit status 83 (42.197833ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-832000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 service list -o json: exit status 83 (40.809625ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-832000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 service --namespace=default --https --url hello-node: exit status 83 (39.856708ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-832000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 service hello-node --url --format={{.IP}}: exit status 83 (40.762791ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-832000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 service hello-node --url: exit status 83 (41.894459ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-832000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:1565: failed to parse "* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"": parse "* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 version -o=json --components: exit status 83 (39.802666ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-832000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-832000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-832000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-832000 image ls --format short --alsologtostderr:
I0520 04:44:02.357798   20193 out.go:291] Setting OutFile to fd 1 ...
I0520 04:44:02.357940   20193 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.357943   20193 out.go:304] Setting ErrFile to fd 2...
I0520 04:44:02.357945   20193 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.358070   20193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:44:02.358474   20193 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:44:02.358530   20193 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-832000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-832000 image ls --format table --alsologtostderr:
I0520 04:44:02.568751   20205 out.go:291] Setting OutFile to fd 1 ...
I0520 04:44:02.569026   20205 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.569029   20205 out.go:304] Setting ErrFile to fd 2...
I0520 04:44:02.569031   20205 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.569143   20205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:44:02.569516   20205 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:44:02.569575   20205 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-832000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-832000 image ls --format json --alsologtostderr:
I0520 04:44:02.534921   20203 out.go:291] Setting OutFile to fd 1 ...
I0520 04:44:02.535043   20203 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.535046   20203 out.go:304] Setting ErrFile to fd 2...
I0520 04:44:02.535049   20203 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.535167   20203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:44:02.535554   20203 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:44:02.535619   20203 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-832000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-832000 image ls --format yaml --alsologtostderr:
I0520 04:44:02.390930   20195 out.go:291] Setting OutFile to fd 1 ...
I0520 04:44:02.391079   20195 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.391082   20195 out.go:304] Setting ErrFile to fd 2...
I0520 04:44:02.391085   20195 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.391219   20195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:44:02.391609   20195 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:44:02.391670   20195 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh pgrep buildkitd: exit status 83 (41.883333ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image build -t localhost/my-image:functional-832000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-832000 image build -t localhost/my-image:functional-832000 testdata/build --alsologtostderr:
I0520 04:44:02.466781   20199 out.go:291] Setting OutFile to fd 1 ...
I0520 04:44:02.467099   20199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.467103   20199 out.go:304] Setting ErrFile to fd 2...
I0520 04:44:02.467105   20199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:44:02.467227   20199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:44:02.467657   20199 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:44:02.468109   20199 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:44:02.468357   20199 build_images.go:133] succeeded building to: 
I0520 04:44:02.468361   20199 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls
functional_test.go:442: expected "localhost/my-image:functional-832000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)
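
The test first probes for buildkitd in the VM, then builds from the testdata/build context. A manual sketch of the same sequence, assuming a running profile:

	minikube -p functional-832000 ssh pgrep buildkitd
	minikube -p functional-832000 image build -t localhost/my-image:functional-832000 testdata/build
	minikube -p functional-832000 image ls | grep my-image

With the host stopped, the build log reports both "succeeded building to:" and "failed building to:" with empty targets, so the final image ls finds nothing.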

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image load --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-832000 image load --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr: (1.381500125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-832000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image load --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-832000 image load --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr: (1.3135935s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-832000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.328018917s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-832000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image load --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-832000 image load --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr: (1.19741525s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-832000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image save gcr.io/google-containers/addon-resizer:functional-832000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-832000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.06s)
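
image save and image load are meant to round-trip through a tarball. A manual sketch, assuming a running profile (the tar path is the test's own; any writable path works):

	minikube -p functional-832000 image save gcr.io/google-containers/addon-resizer:functional-832000 /Users/jenkins/workspace/addon-resizer-save.tar
	minikube -p functional-832000 image load /Users/jenkins/workspace/addon-resizer-save.tar
	minikube -p functional-832000 image ls | grep addon-resizer

In this run the preceding ImageSaveToFile failure never produced the tarball, so the load step had nothing real to import.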

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-832000 docker-env) && out/minikube-darwin-arm64 status -p functional-832000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-832000 docker-env) && out/minikube-darwin-arm64 status -p functional-832000": exit status 1 (41.866292ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2: exit status 83 (40.90175ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
** stderr ** 
	I0520 04:44:02.602472   20207 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:44:02.603446   20207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:44:02.603450   20207 out.go:304] Setting ErrFile to fd 2...
	I0520 04:44:02.603452   20207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:44:02.603570   20207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:44:02.603767   20207 mustload.go:65] Loading cluster: functional-832000
	I0520 04:44:02.603938   20207 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:44:02.608069   20207 out.go:177] * The control-plane node functional-832000 host is not running: state=Stopped
	I0520 04:44:02.612223   20207 out.go:177]   To start a cluster, run: "minikube start -p functional-832000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2: exit status 83 (44.669292ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
** stderr ** 
	I0520 04:44:02.683258   20211 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:44:02.683378   20211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:44:02.683381   20211 out.go:304] Setting ErrFile to fd 2...
	I0520 04:44:02.683383   20211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:44:02.683514   20211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:44:02.683717   20211 mustload.go:65] Loading cluster: functional-832000
	I0520 04:44:02.683944   20211 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:44:02.689195   20211 out.go:177] * The control-plane node functional-832000 host is not running: state=Stopped
	I0520 04:44:02.696267   20211 out.go:177]   To start a cluster, run: "minikube start -p functional-832000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2: exit status 83 (38.450584ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
** stderr ** 
	I0520 04:44:02.644234   20209 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:44:02.644375   20209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:44:02.644378   20209 out.go:304] Setting ErrFile to fd 2...
	I0520 04:44:02.644381   20209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:44:02.644491   20209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:44:02.644686   20209 mustload.go:65] Loading cluster: functional-832000
	I0520 04:44:02.644864   20209 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:44:02.649207   20209 out.go:177] * The control-plane node functional-832000 host is not running: state=Stopped
	I0520 04:44:02.652232   20209 out.go:177]   To start a cluster, run: "minikube start -p functional-832000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-832000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-832000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-832000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
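
update-context only rewrites the kubeconfig entry for the profile's current IP and port. A manual sketch, assuming a configured profile:

	minikube -p functional-832000 update-context
	kubectl config get-contexts functional-832000

All three UpdateContextCmd subtests fail identically because the command bails out with the same "host is not running" message before touching the kubeconfig.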

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030227709s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.15s)

TestMultiControlPlane/serial/StartCluster (10.11s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-559000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-559000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.039695916s)

-- stdout --
	* [ha-559000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-559000" primary control-plane node in "ha-559000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-559000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:46:14.818006   20268 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:46:14.818131   20268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:46:14.818134   20268 out.go:304] Setting ErrFile to fd 2...
	I0520 04:46:14.818137   20268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:46:14.818249   20268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:46:14.819351   20268 out.go:298] Setting JSON to false
	I0520 04:46:14.835466   20268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9945,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:46:14.835536   20268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:46:14.840714   20268 out.go:177] * [ha-559000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:46:14.847705   20268 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:46:14.851678   20268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:46:14.847740   20268 notify.go:220] Checking for updates...
	I0520 04:46:14.854712   20268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:46:14.857621   20268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:46:14.860717   20268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:46:14.863698   20268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:46:14.866861   20268 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:46:14.870660   20268 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:46:14.877603   20268 start.go:297] selected driver: qemu2
	I0520 04:46:14.877609   20268 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:46:14.877620   20268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:46:14.879787   20268 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:46:14.882666   20268 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:46:14.885771   20268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:46:14.885788   20268 cni.go:84] Creating CNI manager for ""
	I0520 04:46:14.885798   20268 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:46:14.885802   20268 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:46:14.885838   20268 start.go:340] cluster config:
	{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:46:14.890208   20268 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:46:14.897788   20268 out.go:177] * Starting "ha-559000" primary control-plane node in "ha-559000" cluster
	I0520 04:46:14.901483   20268 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:46:14.901505   20268 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:46:14.901518   20268 cache.go:56] Caching tarball of preloaded images
	I0520 04:46:14.901576   20268 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:46:14.901581   20268 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:46:14.901782   20268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/ha-559000/config.json ...
	I0520 04:46:14.901794   20268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/ha-559000/config.json: {Name:mkedad0f48c1ea6a5cad5bd9742618c7a0736677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:46:14.902109   20268 start.go:360] acquireMachinesLock for ha-559000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:46:14.902154   20268 start.go:364] duration metric: took 37.5µs to acquireMachinesLock for "ha-559000"
	I0520 04:46:14.902167   20268 start.go:93] Provisioning new machine with config: &{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:46:14.902204   20268 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:46:14.910522   20268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:46:14.926674   20268 start.go:159] libmachine.API.Create for "ha-559000" (driver="qemu2")
	I0520 04:46:14.926697   20268 client.go:168] LocalClient.Create starting
	I0520 04:46:14.926756   20268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:46:14.926784   20268 main.go:141] libmachine: Decoding PEM data...
	I0520 04:46:14.926797   20268 main.go:141] libmachine: Parsing certificate...
	I0520 04:46:14.926833   20268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:46:14.926856   20268 main.go:141] libmachine: Decoding PEM data...
	I0520 04:46:14.926864   20268 main.go:141] libmachine: Parsing certificate...
	I0520 04:46:14.927292   20268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:46:15.109494   20268 main.go:141] libmachine: Creating SSH key...
	I0520 04:46:15.403914   20268 main.go:141] libmachine: Creating Disk image...
	I0520 04:46:15.403923   20268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:46:15.404180   20268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:46:15.417512   20268 main.go:141] libmachine: STDOUT: 
	I0520 04:46:15.417536   20268 main.go:141] libmachine: STDERR: 
	I0520 04:46:15.417599   20268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2 +20000M
	I0520 04:46:15.428783   20268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:46:15.428803   20268 main.go:141] libmachine: STDERR: 
	I0520 04:46:15.428833   20268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:46:15.428838   20268 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:46:15.428863   20268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d8:b0:1f:ed:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:46:15.430572   20268 main.go:141] libmachine: STDOUT: 
	I0520 04:46:15.430588   20268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:46:15.430612   20268 client.go:171] duration metric: took 503.916958ms to LocalClient.Create
	I0520 04:46:17.432784   20268 start.go:128] duration metric: took 2.530585042s to createHost
	I0520 04:46:17.432853   20268 start.go:83] releasing machines lock for "ha-559000", held for 2.530721375s
	W0520 04:46:17.432917   20268 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:46:17.444375   20268 out.go:177] * Deleting "ha-559000" in qemu2 ...
	W0520 04:46:17.469916   20268 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:46:17.469943   20268 start.go:728] Will try again in 5 seconds ...
	I0520 04:46:22.472092   20268 start.go:360] acquireMachinesLock for ha-559000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:46:22.472571   20268 start.go:364] duration metric: took 379.333µs to acquireMachinesLock for "ha-559000"
	I0520 04:46:22.472679   20268 start.go:93] Provisioning new machine with config: &{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:46:22.472938   20268 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:46:22.483443   20268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:46:22.532164   20268 start.go:159] libmachine.API.Create for "ha-559000" (driver="qemu2")
	I0520 04:46:22.532218   20268 client.go:168] LocalClient.Create starting
	I0520 04:46:22.532325   20268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:46:22.532395   20268 main.go:141] libmachine: Decoding PEM data...
	I0520 04:46:22.532418   20268 main.go:141] libmachine: Parsing certificate...
	I0520 04:46:22.532474   20268 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:46:22.532536   20268 main.go:141] libmachine: Decoding PEM data...
	I0520 04:46:22.532549   20268 main.go:141] libmachine: Parsing certificate...
	I0520 04:46:22.533541   20268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:46:22.672928   20268 main.go:141] libmachine: Creating SSH key...
	I0520 04:46:22.758567   20268 main.go:141] libmachine: Creating Disk image...
	I0520 04:46:22.758572   20268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:46:22.758736   20268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:46:22.771381   20268 main.go:141] libmachine: STDOUT: 
	I0520 04:46:22.771415   20268 main.go:141] libmachine: STDERR: 
	I0520 04:46:22.771462   20268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2 +20000M
	I0520 04:46:22.782630   20268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:46:22.782644   20268 main.go:141] libmachine: STDERR: 
	I0520 04:46:22.782660   20268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:46:22.782665   20268 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:46:22.782698   20268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f7:4a:f7:42:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:46:22.784388   20268 main.go:141] libmachine: STDOUT: 
	I0520 04:46:22.784404   20268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:46:22.784416   20268 client.go:171] duration metric: took 252.194583ms to LocalClient.Create
	I0520 04:46:24.786719   20268 start.go:128] duration metric: took 2.313729416s to createHost
	I0520 04:46:24.786912   20268 start.go:83] releasing machines lock for "ha-559000", held for 2.3143415s
	W0520 04:46:24.787377   20268 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-559000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-559000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:46:24.800939   20268 out.go:177] 
	W0520 04:46:24.802503   20268 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:46:24.802528   20268 out.go:239] * 
	* 
	W0520 04:46:24.805031   20268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:46:24.815941   20268 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-559000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (68.139375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.11s)
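The whole serial group fails for one reason, visible in the start log above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon (Failed to connect to "/var/run/socket_vmnet": Connection refused). A minimal stdlib-only Go sketch of that reachability probe follows, for illustration only; it is not a minikube helper, and it assumes only the SocketVMnetPath value shown in the cluster config above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the unix socket that socket_vmnet_client needs. An error here
// corresponds to the "Connection refused" STDERR lines in the log above:
// the socket file may exist, but no daemon is accepting connections on it.
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening; QEMU VM networking can be brokered")
}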

                                                
                                    
TestMultiControlPlane/serial/DeployApp (84.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.279208ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-559000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- rollout status deployment/busybox: exit status 1 (55.710959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.07025ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.636667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.265041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.562583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.624708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.01675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.867916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.034167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.29875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.514792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.806583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.514417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.803458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.488291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.882541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (84.04s)
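Every kubectl invocation above fails identically because StartCluster never created the cluster, so kubeconfig has no server entry for "ha-559000"; the harness still polls the pod-IP query ten times (ha_test.go:140) before giving up at ha_test.go:159. A stdlib-only Go sketch of that retry shape; the helper name and the ten-second spacing are assumptions inferred from the ten attempts and the 84 s runtime, not code from ha_test.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs shells out the same way the test harness does; with no cluster in
// kubeconfig it exits 1 with `no server found for cluster "ha-559000"`.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	var lastErr error
	for attempt := 1; attempt <= 10; attempt++ {
		ips, err := podIPs("ha-559000")
		if err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		lastErr = err
		time.Sleep(10 * time.Second) // assumed spacing; the error "may be temporary"
	}
	fmt.Println("failed to resolve pod IPs:", lastErr)
}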

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-559000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.753959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-559000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.15825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-559000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-559000 -v=7 --alsologtostderr: exit status 83 (42.319375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-559000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:47:49.049497   20369 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:49.050108   20369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.050112   20369 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:49.050116   20369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.050274   20369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:49.050487   20369 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:49.050677   20369 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:49.055413   20369 out.go:177] * The control-plane node ha-559000 host is not running: state=Stopped
	I0520 04:47:49.058440   20369 out.go:177]   To start a cluster, run: "minikube start -p ha-559000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-559000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.643875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-559000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-559000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.333709ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-559000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-559000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-559000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.171792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
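Two errors stack in this test: kubectl exits 1 because the ha-559000 context was never written to kubeconfig, and ha_test.go:264 then feeds kubectl's empty stdout to the JSON decoder, which is where "unexpected end of JSON input" comes from. A stdlib Go reproduction of that second error:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A failed kubectl run leaves zero bytes of stdout to decode.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}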

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-559000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-559000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.9465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
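The two assertions above decode the `profile list --output json` dump and check the node count (4 expected, presumably three control planes plus the added worker, versus the single placeholder node actually present) and the synthesized "HAppy" status. A stdlib Go sketch of that decode; the struct is ours and keeps only the fields the assertions read, though the field names match the JSON in the log.

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the fields the assertions inspect.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the dump in the log above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-559000","Status":"Stopped","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// Wanted: 4 nodes and status "HAppy"; got 1 node and "Stopped".
	fmt.Println(p.Name, p.Status, "nodes:", len(p.Config.Nodes))
}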

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status --output json -v=7 --alsologtostderr: exit status 7 (29.137416ms)

                                                
                                                
-- stdout --
	{"Name":"ha-559000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:47:49.275883   20382 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:49.276031   20382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.276034   20382 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:49.276037   20382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.276164   20382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:49.276283   20382 out.go:298] Setting JSON to true
	I0520 04:47:49.276292   20382 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:49.276352   20382 notify.go:220] Checking for updates...
	I0520 04:47:49.276492   20382 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:49.276499   20382 status.go:255] checking status of ha-559000 ...
	I0520 04:47:49.276702   20382 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:47:49.276706   20382 status.go:343] host is not running, skipping remaining checks
	I0520 04:47:49.276708   20382 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-559000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.749208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
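The decode error at ha_test.go:333 is a shape mismatch rather than bad JSON: with a single stopped profile, `status --output json` prints one object (the stdout above), while the test unmarshals into a []cmd.Status slice. A stdlib Go reproduction; the Status struct here is a stand-in for minikube's cmd.Status, with the fields taken from the log.

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in for minikube's cmd.Status.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	out := []byte(`{"Name":"ha-559000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status // a single-object decode succeeds
	fmt.Println(json.Unmarshal(out, &one), one.Host)
}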

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.909709ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:47:49.334335   20386 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:49.334912   20386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.334916   20386 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:49.334918   20386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.335064   20386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:49.335295   20386 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:49.335499   20386 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:49.340017   20386 out.go:177] 
	W0520 04:47:49.344063   20386 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0520 04:47:49.344067   20386 out.go:239] * 
	* 
	W0520 04:47:49.346538   20386 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:47:49.349956   20386 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-559000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (29.142084ms)

                                                
                                                
-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:47:49.381479   20388 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:49.381637   20388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.381641   20388 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:49.381643   20388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.381752   20388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:49.381876   20388 out.go:298] Setting JSON to false
	I0520 04:47:49.381885   20388 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:49.381951   20388 notify.go:220] Checking for updates...
	I0520 04:47:49.382078   20388 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:49.382085   20388 status.go:255] checking status of ha-559000 ...
	I0520 04:47:49.382302   20388 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:47:49.382307   20388 status.go:343] host is not running, skipping remaining checks
	I0520 04:47:49.382310   20388 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.968375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-559000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
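The assertion at ha_test.go:413 reads the Status field of each profile in the `profile list --output json` payload shown above and expects "Degraded" once a secondary control-plane node is stopped. A minimal sketch of that check, decoding only the fields the assertion uses; the struct shape mirrors the JSON in the log, but the type and variable names are illustrative, not taken from the test source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models just the fields the check reads; names match the JSON above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// ha_test.go:413 expects "Degraded" here; this run reports "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}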
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.94875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.01s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 node start m02 -v=7 --alsologtostderr: exit status 85 (44.618292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0520 04:47:49.538314   20398 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:49.538893   20398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.538897   20398 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:49.538899   20398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.539070   20398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:49.539282   20398 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:49.539479   20398 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:49.542826   20398 out.go:177] 
	W0520 04:47:49.545637   20398 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0520 04:47:49.545642   20398 out.go:239] * 
	* 
	W0520 04:47:49.548009   20398 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:47:49.550618   20398 out.go:177] 

** /stderr **
ha_test.go:422: I0520 04:47:49.538314   20398 out.go:291] Setting OutFile to fd 1 ...
I0520 04:47:49.538893   20398 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:47:49.538897   20398 out.go:304] Setting ErrFile to fd 2...
I0520 04:47:49.538899   20398 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:47:49.539070   20398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:47:49.539282   20398 mustload.go:65] Loading cluster: ha-559000
I0520 04:47:49.539479   20398 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:47:49.542826   20398 out.go:177] 
W0520 04:47:49.545637   20398 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0520 04:47:49.545642   20398 out.go:239] * 
* 
W0520 04:47:49.548009   20398 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:47:49.550618   20398 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-559000 node start m02 -v=7 --alsologtostderr": exit status 85
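Exit status 85 with GUEST_NODE_RETRIEVE means the profile simply has no node named m02, which follows from StartCluster having failed before any secondary node was added. A quick diagnostic sketch, not part of the test suite, that checks for the node before attempting `node start`; the binary path matches the one used throughout this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the profile's nodes the same way ha_test.go does later in this report.
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", "ha-559000").CombinedOutput()
	if err != nil {
		fmt.Println("node list failed:", err)
	}
	if strings.Contains(string(out), "m02") {
		fmt.Println("m02 exists; node start should be able to retrieve it")
	} else {
		// Matches this run: only the primary node exists, so
		// GUEST_NODE_RETRIEVE (exit status 85) is the expected outcome.
		fmt.Println("m02 missing; node start m02 will fail")
	}
}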
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (29.375583ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:47:49.583394   20400 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:49.583560   20400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.583563   20400 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:49.583566   20400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:49.583690   20400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:49.583800   20400 out.go:298] Setting JSON to false
	I0520 04:47:49.583809   20400 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:49.583867   20400 notify.go:220] Checking for updates...
	I0520 04:47:49.583990   20400 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:49.583997   20400 status.go:255] checking status of ha-559000 ...
	I0520 04:47:49.584214   20400 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:47:49.584217   20400 status.go:343] host is not running, skipping remaining checks
	I0520 04:47:49.584220   20400 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (74.530958ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:47:51.151067   20402 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:51.151272   20402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:51.151276   20402 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:51.151279   20402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:51.151447   20402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:51.151596   20402 out.go:298] Setting JSON to false
	I0520 04:47:51.151609   20402 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:51.151653   20402 notify.go:220] Checking for updates...
	I0520 04:47:51.151880   20402 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:51.151889   20402 status.go:255] checking status of ha-559000 ...
	I0520 04:47:51.152165   20402 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:47:51.152170   20402 status.go:343] host is not running, skipping remaining checks
	I0520 04:47:51.152173   20402 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (76.483542ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:47:52.765319   20404 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:52.765549   20404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:52.765554   20404 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:52.765558   20404 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:52.765758   20404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:52.765933   20404 out.go:298] Setting JSON to false
	I0520 04:47:52.765946   20404 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:52.765999   20404 notify.go:220] Checking for updates...
	I0520 04:47:52.766257   20404 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:52.766268   20404 status.go:255] checking status of ha-559000 ...
	I0520 04:47:52.766590   20404 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:47:52.766595   20404 status.go:343] host is not running, skipping remaining checks
	I0520 04:47:52.766599   20404 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (74.547875ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:47:55.284364   20406 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:47:55.284573   20406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:55.284578   20406 out.go:304] Setting ErrFile to fd 2...
	I0520 04:47:55.284582   20406 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:47:55.284766   20406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:47:55.284933   20406 out.go:298] Setting JSON to false
	I0520 04:47:55.284946   20406 mustload.go:65] Loading cluster: ha-559000
	I0520 04:47:55.284997   20406 notify.go:220] Checking for updates...
	I0520 04:47:55.285242   20406 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:47:55.285255   20406 status.go:255] checking status of ha-559000 ...
	I0520 04:47:55.285591   20406 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:47:55.285597   20406 status.go:343] host is not running, skipping remaining checks
	I0520 04:47:55.285600   20406 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (72.778916ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:48:00.022604   20408 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:00.022808   20408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:00.022813   20408 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:00.022816   20408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:00.022986   20408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:00.023160   20408 out.go:298] Setting JSON to false
	I0520 04:48:00.023171   20408 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:00.023217   20408 notify.go:220] Checking for updates...
	I0520 04:48:00.023458   20408 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:00.023466   20408 status.go:255] checking status of ha-559000 ...
	I0520 04:48:00.023741   20408 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:00.023746   20408 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:00.023764   20408 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (75.080084ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:48:04.289757   20410 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:04.290016   20410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:04.290021   20410 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:04.290024   20410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:04.290224   20410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:04.290399   20410 out.go:298] Setting JSON to false
	I0520 04:48:04.290412   20410 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:04.290456   20410 notify.go:220] Checking for updates...
	I0520 04:48:04.290679   20410 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:04.290688   20410 status.go:255] checking status of ha-559000 ...
	I0520 04:48:04.290985   20410 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:04.290990   20410 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:04.290993   20410 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (72.245834ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:48:15.751606   20415 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:15.751838   20415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:15.751843   20415 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:15.751846   20415 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:15.752025   20415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:15.752193   20415 out.go:298] Setting JSON to false
	I0520 04:48:15.752205   20415 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:15.752250   20415 notify.go:220] Checking for updates...
	I0520 04:48:15.752488   20415 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:15.752498   20415 status.go:255] checking status of ha-559000 ...
	I0520 04:48:15.752774   20415 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:15.752780   20415 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:15.752782   20415 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (72.344167ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:48:29.202619   20419 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:29.202827   20419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:29.202831   20419 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:29.202834   20419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:29.202988   20419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:29.203142   20419 out.go:298] Setting JSON to false
	I0520 04:48:29.203153   20419 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:29.203185   20419 notify.go:220] Checking for updates...
	I0520 04:48:29.203432   20419 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:29.203440   20419 status.go:255] checking status of ha-559000 ...
	I0520 04:48:29.203744   20419 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:29.203749   20419 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:29.203752   20419 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (72.9135ms)

-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:48:39.484557   20423 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:39.484777   20423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:39.484781   20423 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:39.484784   20423 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:39.484949   20423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:39.485098   20423 out.go:298] Setting JSON to false
	I0520 04:48:39.485109   20423 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:39.485154   20423 notify.go:220] Checking for updates...
	I0520 04:48:39.485364   20423 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:39.485373   20423 status.go:255] checking status of ha-559000 ...
	I0520 04:48:39.485637   20423 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:39.485642   20423 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:39.485645   20423 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr" : exit status 7
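The timestamps above (04:47:49 through 04:48:39) show ha_test.go:428 re-running `status` with a growing delay before ha_test.go:432 gives up. A minimal sketch of that poll-until-deadline pattern, assuming a fixed sleep rather than the test's exact backoff schedule:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(50 * time.Second) // roughly the window seen in this log
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-559000",
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		if !strings.Contains(string(out), "host: Stopped") {
			fmt.Println("host is no longer Stopped")
			return
		}
		time.Sleep(5 * time.Second) // the real test lengthens this delay between retries
	}
	fmt.Println("gave up: status still reports Stopped") // what ha_test.go:432 then reports
}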
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (32.910292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-559000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
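The node-count assertion at ha_test.go:304 counts the entries in Config.Nodes from the same JSON payload, expecting 4 (three control planes plus a worker) but finding only the single primary node. A sketch of that count under the same illustrative naming as the earlier decode example; the field names follow the JSON above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles extends the earlier sketch down into Config.Nodes.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var ps profiles
	if err := json.Unmarshal(out, &ps); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range ps.Valid {
		fmt.Printf("%s has %d node(s)\n", p.Name, len(p.Config.Nodes)) // this run: 1, expected 4
	}
}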
ha_test.go:307: expected profile "ha-559000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.886375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.94s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-559000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-559000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-559000 -v=7 --alsologtostderr: (2.597489958s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-559000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-559000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.213896959s)

-- stdout --
	* [ha-559000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-559000" primary control-plane node in "ha-559000" cluster
	* Restarting existing qemu2 VM for "ha-559000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-559000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:48:42.309302   20453 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:42.309477   20453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:42.309480   20453 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:42.309483   20453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:42.309635   20453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:42.310860   20453 out.go:298] Setting JSON to false
	I0520 04:48:42.329918   20453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10093,"bootTime":1716195629,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:48:42.329991   20453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:48:42.333818   20453 out.go:177] * [ha-559000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:48:42.340828   20453 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:48:42.344758   20453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:48:42.340867   20453 notify.go:220] Checking for updates...
	I0520 04:48:42.347878   20453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:48:42.350814   20453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:48:42.353818   20453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:48:42.356841   20453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:48:42.360220   20453 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:42.360282   20453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:48:42.364794   20453 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:48:42.371794   20453 start.go:297] selected driver: qemu2
	I0520 04:48:42.371801   20453 start.go:901] validating driver "qemu2" against &{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:48:42.371862   20453 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:48:42.374041   20453 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:48:42.374066   20453 cni.go:84] Creating CNI manager for ""
	I0520 04:48:42.374070   20453 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:48:42.374111   20453 start.go:340] cluster config:
	{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:48:42.378263   20453 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:48:42.385800   20453 out.go:177] * Starting "ha-559000" primary control-plane node in "ha-559000" cluster
	I0520 04:48:42.389833   20453 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:48:42.389846   20453 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:48:42.389856   20453 cache.go:56] Caching tarball of preloaded images
	I0520 04:48:42.389907   20453 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:48:42.389913   20453 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:48:42.389966   20453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/ha-559000/config.json ...
	I0520 04:48:42.390362   20453 start.go:360] acquireMachinesLock for ha-559000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:48:42.390397   20453 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "ha-559000"
	I0520 04:48:42.390406   20453 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:48:42.390412   20453 fix.go:54] fixHost starting: 
	I0520 04:48:42.390528   20453 fix.go:112] recreateIfNeeded on ha-559000: state=Stopped err=<nil>
	W0520 04:48:42.390535   20453 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:48:42.398855   20453 out.go:177] * Restarting existing qemu2 VM for "ha-559000" ...
	I0520 04:48:42.402792   20453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f7:4a:f7:42:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:48:42.404725   20453 main.go:141] libmachine: STDOUT: 
	I0520 04:48:42.404746   20453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:48:42.404771   20453 fix.go:56] duration metric: took 14.359583ms for fixHost
	I0520 04:48:42.404775   20453 start.go:83] releasing machines lock for "ha-559000", held for 14.374584ms
	W0520 04:48:42.404781   20453 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:48:42.404807   20453 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:48:42.404812   20453 start.go:728] Will try again in 5 seconds ...
	I0520 04:48:47.406915   20453 start.go:360] acquireMachinesLock for ha-559000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:48:47.407296   20453 start.go:364] duration metric: took 300.208µs to acquireMachinesLock for "ha-559000"
	I0520 04:48:47.407415   20453 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:48:47.407437   20453 fix.go:54] fixHost starting: 
	I0520 04:48:47.408132   20453 fix.go:112] recreateIfNeeded on ha-559000: state=Stopped err=<nil>
	W0520 04:48:47.408163   20453 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:48:47.412593   20453 out.go:177] * Restarting existing qemu2 VM for "ha-559000" ...
	I0520 04:48:47.416671   20453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f7:4a:f7:42:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:48:47.425614   20453 main.go:141] libmachine: STDOUT: 
	I0520 04:48:47.425673   20453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:48:47.425732   20453 fix.go:56] duration metric: took 18.302334ms for fixHost
	I0520 04:48:47.425744   20453 start.go:83] releasing machines lock for "ha-559000", held for 18.429167ms
	W0520 04:48:47.425931   20453 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-559000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-559000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:48:47.433352   20453 out.go:177] 
	W0520 04:48:47.437573   20453 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:48:47.437623   20453 out.go:239] * 
	* 
	W0520 04:48:47.440051   20453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:48:47.447552   20453 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-559000 -v=7 --alsologtostderr" : exit status 80
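Both restart attempts above fail the same way: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused because no socket_vmnet daemon is listening on that socket. A one-off sketch that checks this precondition directly (illustrative only, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket the qemu2 driver depends on.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "Connection refused" here reproduces the driver failure in this log.
		fmt.Println("socket_vmnet is not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("a socket_vmnet daemon is listening")
}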
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-559000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (33.398375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.94s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.980709ms)

-- stdout --
	* The control-plane node ha-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-559000"

-- /stdout --
** stderr ** 
	I0520 04:48:47.591206   20465 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:47.591620   20465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:47.591624   20465 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:47.591626   20465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:47.591756   20465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:47.591977   20465 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:47.592167   20465 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:47.596371   20465 out.go:177] * The control-plane node ha-559000 host is not running: state=Stopped
	I0520 04:48:47.600511   20465 out.go:177]   To start a cluster, run: "minikube start -p ha-559000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-559000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (29.945208ms)

                                                
                                                
-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:48:47.633602   20467 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:47.633771   20467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:47.633774   20467 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:47.633776   20467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:47.633925   20467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:47.634043   20467 out.go:298] Setting JSON to false
	I0520 04:48:47.634055   20467 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:47.634116   20467 notify.go:220] Checking for updates...
	I0520 04:48:47.634251   20467 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:47.634257   20467 status.go:255] checking status of ha-559000 ...
	I0520 04:48:47.634469   20467 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:47.634472   20467 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:47.634475   20467 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.881667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
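Three distinct exit codes appear across these subtests, and all three are consistent with a host that never came up: 80 accompanies the GUEST_PROVISION error when a start attempt fails, 83 accompanies the advisory "host is not running: state=Stopped" on commands that need a running cluster, and 7 comes from status reporting a stopped host (which helpers_test treats as "may be ok"). Checking a code by hand is plain shell plumbing, e.g.:

	out/minikube-darwin-arm64 -p ha-559000 node delete m03
	echo "exit status: $?"   # 83 in this run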

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-559000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (28.818542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
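The Degraded assertion compares against the Status field of `profile list --output json`, whose shape is visible in the escaped blob above (top-level "invalid"/"valid" arrays, with Config.Nodes per profile). When inspecting that output by hand, a jq filter is easier than reading the escapes; this assumes only that jq is installed on the host:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'

For this run it would report Status "Stopped" with a single node, which is why a test expecting "Degraded" fails.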

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-559000 stop -v=7 --alsologtostderr: (3.674694875s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr: exit status 7 (64.010541ms)

                                                
                                                
-- stdout --
	ha-559000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:48:51.499800   20495 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:51.499979   20495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:51.499984   20495 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:51.499987   20495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:51.500169   20495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:51.500318   20495 out.go:298] Setting JSON to false
	I0520 04:48:51.500329   20495 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:51.500364   20495 notify.go:220] Checking for updates...
	I0520 04:48:51.500593   20495 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:51.500606   20495 status.go:255] checking status of ha-559000 ...
	I0520 04:48:51.500887   20495 status.go:330] ha-559000 host status = "Stopped" (err=<nil>)
	I0520 04:48:51.500892   20495 status.go:343] host is not running, skipping remaining checks
	I0520 04:48:51.500895   20495 status.go:257] ha-559000 status: &{Name:ha-559000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-559000 status -v=7 --alsologtostderr": ha-559000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (32.667167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-559000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-559000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183002416s)

                                                
                                                
-- stdout --
	* [ha-559000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-559000" primary control-plane node in "ha-559000" cluster
	* Restarting existing qemu2 VM for "ha-559000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-559000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:48:51.562091   20499 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:51.562205   20499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:51.562209   20499 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:51.562212   20499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:51.562338   20499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:51.563293   20499 out.go:298] Setting JSON to false
	I0520 04:48:51.579306   20499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10102,"bootTime":1716195629,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:48:51.579374   20499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:48:51.584842   20499 out.go:177] * [ha-559000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:48:51.592760   20499 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:48:51.592808   20499 notify.go:220] Checking for updates...
	I0520 04:48:51.597951   20499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:48:51.600806   20499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:48:51.603721   20499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:48:51.606751   20499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:48:51.609717   20499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:48:51.613072   20499 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:51.613351   20499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:48:51.617725   20499 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:48:51.624737   20499 start.go:297] selected driver: qemu2
	I0520 04:48:51.624743   20499 start.go:901] validating driver "qemu2" against &{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:48:51.624795   20499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:48:51.627039   20499 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:48:51.627064   20499 cni.go:84] Creating CNI manager for ""
	I0520 04:48:51.627069   20499 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:48:51.627121   20499 start.go:340] cluster config:
	{Name:ha-559000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-559000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:48:51.631415   20499 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:48:51.638720   20499 out.go:177] * Starting "ha-559000" primary control-plane node in "ha-559000" cluster
	I0520 04:48:51.642728   20499 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:48:51.642744   20499 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:48:51.642757   20499 cache.go:56] Caching tarball of preloaded images
	I0520 04:48:51.642804   20499 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:48:51.642809   20499 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:48:51.642871   20499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/ha-559000/config.json ...
	I0520 04:48:51.643267   20499 start.go:360] acquireMachinesLock for ha-559000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:48:51.643294   20499 start.go:364] duration metric: took 21µs to acquireMachinesLock for "ha-559000"
	I0520 04:48:51.643304   20499 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:48:51.643310   20499 fix.go:54] fixHost starting: 
	I0520 04:48:51.643428   20499 fix.go:112] recreateIfNeeded on ha-559000: state=Stopped err=<nil>
	W0520 04:48:51.643436   20499 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:48:51.651726   20499 out.go:177] * Restarting existing qemu2 VM for "ha-559000" ...
	I0520 04:48:51.654735   20499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f7:4a:f7:42:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:48:51.656720   20499 main.go:141] libmachine: STDOUT: 
	I0520 04:48:51.656740   20499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:48:51.656767   20499 fix.go:56] duration metric: took 13.458583ms for fixHost
	I0520 04:48:51.656770   20499 start.go:83] releasing machines lock for "ha-559000", held for 13.472ms
	W0520 04:48:51.656778   20499 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:48:51.656817   20499 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:48:51.656822   20499 start.go:728] Will try again in 5 seconds ...
	I0520 04:48:56.658928   20499 start.go:360] acquireMachinesLock for ha-559000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:48:56.659353   20499 start.go:364] duration metric: took 314.833µs to acquireMachinesLock for "ha-559000"
	I0520 04:48:56.659474   20499 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:48:56.659491   20499 fix.go:54] fixHost starting: 
	I0520 04:48:56.660202   20499 fix.go:112] recreateIfNeeded on ha-559000: state=Stopped err=<nil>
	W0520 04:48:56.660228   20499 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:48:56.668606   20499 out.go:177] * Restarting existing qemu2 VM for "ha-559000" ...
	I0520 04:48:56.672773   20499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f7:4a:f7:42:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/ha-559000/disk.qcow2
	I0520 04:48:56.681659   20499 main.go:141] libmachine: STDOUT: 
	I0520 04:48:56.681715   20499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:48:56.681784   20499 fix.go:56] duration metric: took 22.291ms for fixHost
	I0520 04:48:56.681799   20499 start.go:83] releasing machines lock for "ha-559000", held for 22.420625ms
	W0520 04:48:56.681995   20499 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-559000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-559000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:48:56.689573   20499 out.go:177] 
	W0520 04:48:56.693692   20499 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:48:56.693716   20499 out.go:239] * 
	* 
	W0520 04:48:56.696469   20499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:48:56.704539   20499 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-559000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (67.997875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
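The restart path shows the failure mechanics in full: fixHost launches qemu through socket_vmnet_client, gets the refusal, waits five seconds, retries once, and then exits 80 with GUEST_PROVISION. The advice to run "minikube delete -p ha-559000" cannot help here, because the profile is not corrupt; the daemon behind /var/run/socket_vmnet is simply not accepting connections. Two generic host-side checks (standard macOS tools, with the path taken from the log):

	ls -l /var/run/socket_vmnet                 # does the socket file exist?
	ps ax | grep -v grep | grep socket_vmnet    # is the daemon running?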

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-559000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.418167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-559000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-559000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.234708ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-559000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-559000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:48:56.916046   20515 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:48:56.916216   20515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:56.916220   20515 out.go:304] Setting ErrFile to fd 2...
	I0520 04:48:56.916222   20515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:48:56.916359   20515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:48:56.916594   20515 mustload.go:65] Loading cluster: ha-559000
	I0520 04:48:56.916786   20515 config.go:182] Loaded profile config "ha-559000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:48:56.920803   20515 out.go:177] * The control-plane node ha-559000 host is not running: state=Stopped
	I0520 04:48:56.923778   20515 out.go:177]   To start a cluster, run: "minikube start -p ha-559000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-559000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.396708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-559000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-559000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-559000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-559000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-559000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-559000 -n ha-559000: exit status 7 (29.384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-559000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

                                                
                                    
TestImageBuild/serial/Setup (9.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-223000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-223000 --driver=qemu2 : exit status 80 (9.763379709s)

                                                
                                                
-- stdout --
	* [image-223000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-223000" primary control-plane node in "image-223000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-223000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-223000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-223000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-223000 -n image-223000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-223000 -n image-223000: exit status 7 (68.98825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-223000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.83s)
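TestImageBuild rules out stale profile state as the cause: this is a brand-new profile going through the create path ("Creating qemu2 VM ..."), and it hits the identical refusal even after minikube deletes and recreates the machine itself. That points at the CI host rather than at any one cluster. If socket_vmnet on this agent were Homebrew-managed (an assumption; the /opt/socket_vmnet layout in the log is also consistent with a manual make install), restarting its service would look something like:

	sudo brew services restart socket_vmnet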

                                                
                                    
TestJSONOutput/start/Command (9.67s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-086000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-086000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.665092916s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"894b7416-cd19-43e9-a9e8-89c067b6f2ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-086000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b4c1be7-4ca7-4c70-9d25-d455621f1a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18929"}}
	{"specversion":"1.0","id":"487ebc2b-378c-4ab5-adfc-caedd14f5419","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig"}}
	{"specversion":"1.0","id":"9178e982-7cf8-489d-83ef-a8b2962809a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"317cbbb1-6d32-4c78-8908-81dd076fa91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9ddc155-d79c-40ff-95ae-fde1eeb62a58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube"}}
	{"specversion":"1.0","id":"65981ec4-56e8-492c-af69-15809c82a921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"078ef1ac-a231-4cd1-8ad4-ec6f61297665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"90dfc6a4-138c-43bd-97ab-6bbdb2c66145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"52937d1f-6b2e-4b30-a913-3dfbf6e56034","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-086000\" primary control-plane node in \"json-output-086000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"40c35d27-765d-46ad-9769-5ef7bbb7ac87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"ecf001b2-4903-4c4e-baa4-8c554f498198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-086000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"d71e1c44-a5b8-4ac6-982b-9f265e106181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"50ac2e39-261e-4724-8ba3-a89be3740fff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a340560b-d0cd-4fc2-9992-76deece6b363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-086000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fcb629c1-5377-4349-80c3-c14eb479bac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"8ca8805c-b151-4107-837a-10580b6e68f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-086000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
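Note: the "invalid character 'O'" failure is the stock encoding/json error for non-JSON input. The stray "OUTPUT:" and "ERROR:" lines that socket_vmnet_client writes are interleaved with the CloudEvents stream on stdout, and the first such line aborts the conversion. A minimal sketch of the failure mode (assumed shape, not the actual json_output_test.go code):

	// Sketch (assumed shape, not the test's actual code): decode each
	// captured stdout line as a JSON CloudEvent. The first non-JSON line
	// aborts with exactly the error reported above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `, // stray text from socket_vmnet_client
		}
		for _, line := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
			fmt.Println("event type:", ev["type"])
		}
	}

The same mechanism produces the "invalid character '*'" variant in the unpause test below, where plain-text output replaces the JSON events entirely.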
--- FAIL: TestJSONOutput/start/Command (9.67s)

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-086000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-086000 --output=json --user=testUser: exit status 83 (77.664458ms)

-- stdout --
	{"specversion":"1.0","id":"32696e8f-9e99-4693-9942-b71897010581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-086000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"3afee0e8-9288-4365-ab6b-7a03218e7517","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-086000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-086000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-086000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-086000 --output=json --user=testUser: exit status 83 (46.328208ms)

-- stdout --
	* The control-plane node json-output-086000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-086000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-086000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-086000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-346000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-346000 --driver=qemu2 : exit status 80 (9.761775625s)

-- stdout --
	* [first-346000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-346000" primary control-plane node in "first-346000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-346000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-346000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
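Note: this block and every start failure in this report share one root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a vmnet file descriptor. A hypothetical diagnostic (not part of the suite) that reproduces the refusal from the host:

	// Hypothetical diagnostic, not part of the test suite: dial the unix
	// socket that socket_vmnet_client needs. On this CI host it prints
	// "connect: connection refused" because no socket_vmnet daemon is
	// serving /var/run/socket_vmnet.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}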
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-346000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-20 04:49:30.090595 -0700 PDT m=+451.019823167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-348000 -n second-348000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-348000 -n second-348000: exit status 85 (75.9965ms)

-- stdout --
	* Profile "second-348000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-348000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-348000" host is not running, skipping log retrieval (state="* Profile \"second-348000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-348000\"")
helpers_test.go:175: Cleaning up "second-348000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-348000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-20 04:49:30.397231 -0700 PDT m=+451.326462459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-346000 -n first-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-346000 -n first-346000: exit status 7 (29.318416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-346000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-346000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-346000
--- FAIL: TestMinikubeProfile (10.19s)

TestMountStart/serial/StartWithMountFirst (9.95s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-182000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-182000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.878017208s)

-- stdout --
	* [mount-start-1-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-182000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-182000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-182000 -n mount-start-1-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-182000 -n mount-start-1-182000: exit status 7 (67.979709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.95s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-964000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-964000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.811602875s)

-- stdout --
	* [multinode-964000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-964000" primary control-plane node in "multinode-964000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-964000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:49:40.820633   20685 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:49:40.820765   20685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:49:40.820768   20685 out.go:304] Setting ErrFile to fd 2...
	I0520 04:49:40.820771   20685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:49:40.820885   20685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:49:40.822035   20685 out.go:298] Setting JSON to false
	I0520 04:49:40.838088   20685 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10151,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:49:40.838159   20685 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:49:40.843176   20685 out.go:177] * [multinode-964000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:49:40.849089   20685 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:49:40.849152   20685 notify.go:220] Checking for updates...
	I0520 04:49:40.856044   20685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:49:40.859022   20685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:49:40.862106   20685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:49:40.865043   20685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:49:40.866358   20685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:49:40.869141   20685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:49:40.873044   20685 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:49:40.878019   20685 start.go:297] selected driver: qemu2
	I0520 04:49:40.878025   20685 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:49:40.878030   20685 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:49:40.880183   20685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:49:40.883043   20685 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:49:40.886168   20685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:49:40.886183   20685 cni.go:84] Creating CNI manager for ""
	I0520 04:49:40.886187   20685 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:49:40.886190   20685 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:49:40.886230   20685 start.go:340] cluster config:
	{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:49:40.890633   20685 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:49:40.898079   20685 out.go:177] * Starting "multinode-964000" primary control-plane node in "multinode-964000" cluster
	I0520 04:49:40.902019   20685 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:49:40.902039   20685 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:49:40.902048   20685 cache.go:56] Caching tarball of preloaded images
	I0520 04:49:40.902112   20685 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:49:40.902117   20685 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:49:40.902311   20685 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/multinode-964000/config.json ...
	I0520 04:49:40.902323   20685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/multinode-964000/config.json: {Name:mk7b3f06f7f6162d118a9d866d0f3557c38ed7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:49:40.902543   20685 start.go:360] acquireMachinesLock for multinode-964000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:49:40.902577   20685 start.go:364] duration metric: took 27.834µs to acquireMachinesLock for "multinode-964000"
	I0520 04:49:40.902589   20685 start.go:93] Provisioning new machine with config: &{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:49:40.902615   20685 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:49:40.910152   20685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:49:40.927379   20685 start.go:159] libmachine.API.Create for "multinode-964000" (driver="qemu2")
	I0520 04:49:40.927404   20685 client.go:168] LocalClient.Create starting
	I0520 04:49:40.927482   20685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:49:40.927509   20685 main.go:141] libmachine: Decoding PEM data...
	I0520 04:49:40.927521   20685 main.go:141] libmachine: Parsing certificate...
	I0520 04:49:40.927552   20685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:49:40.927574   20685 main.go:141] libmachine: Decoding PEM data...
	I0520 04:49:40.927584   20685 main.go:141] libmachine: Parsing certificate...
	I0520 04:49:40.927929   20685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:49:41.054412   20685 main.go:141] libmachine: Creating SSH key...
	I0520 04:49:41.208136   20685 main.go:141] libmachine: Creating Disk image...
	I0520 04:49:41.208142   20685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:49:41.208327   20685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:49:41.221018   20685 main.go:141] libmachine: STDOUT: 
	I0520 04:49:41.221053   20685 main.go:141] libmachine: STDERR: 
	I0520 04:49:41.221095   20685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2 +20000M
	I0520 04:49:41.232101   20685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:49:41.232116   20685 main.go:141] libmachine: STDERR: 
	I0520 04:49:41.232135   20685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:49:41.232139   20685 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:49:41.232175   20685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:92:a1:6e:52:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:49:41.233908   20685 main.go:141] libmachine: STDOUT: 
	I0520 04:49:41.233923   20685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:49:41.233940   20685 client.go:171] duration metric: took 306.534166ms to LocalClient.Create
	I0520 04:49:43.236118   20685 start.go:128] duration metric: took 2.333509458s to createHost
	I0520 04:49:43.236201   20685 start.go:83] releasing machines lock for "multinode-964000", held for 2.333644625s
	W0520 04:49:43.236253   20685 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:49:43.243515   20685 out.go:177] * Deleting "multinode-964000" in qemu2 ...
	W0520 04:49:43.269326   20685 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:49:43.269355   20685 start.go:728] Will try again in 5 seconds ...
	I0520 04:49:48.271469   20685 start.go:360] acquireMachinesLock for multinode-964000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:49:48.271912   20685 start.go:364] duration metric: took 353.708µs to acquireMachinesLock for "multinode-964000"
	I0520 04:49:48.272058   20685 start.go:93] Provisioning new machine with config: &{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:49:48.272367   20685 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:49:48.281962   20685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:49:48.330763   20685 start.go:159] libmachine.API.Create for "multinode-964000" (driver="qemu2")
	I0520 04:49:48.330828   20685 client.go:168] LocalClient.Create starting
	I0520 04:49:48.330927   20685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:49:48.330991   20685 main.go:141] libmachine: Decoding PEM data...
	I0520 04:49:48.331010   20685 main.go:141] libmachine: Parsing certificate...
	I0520 04:49:48.331076   20685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:49:48.331119   20685 main.go:141] libmachine: Decoding PEM data...
	I0520 04:49:48.331134   20685 main.go:141] libmachine: Parsing certificate...
	I0520 04:49:48.331752   20685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:49:48.472087   20685 main.go:141] libmachine: Creating SSH key...
	I0520 04:49:48.537213   20685 main.go:141] libmachine: Creating Disk image...
	I0520 04:49:48.537219   20685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:49:48.537396   20685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:49:48.549976   20685 main.go:141] libmachine: STDOUT: 
	I0520 04:49:48.550007   20685 main.go:141] libmachine: STDERR: 
	I0520 04:49:48.550066   20685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2 +20000M
	I0520 04:49:48.561011   20685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:49:48.561026   20685 main.go:141] libmachine: STDERR: 
	I0520 04:49:48.561038   20685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:49:48.561052   20685 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:49:48.561088   20685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:97:33:94:ca:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:49:48.562838   20685 main.go:141] libmachine: STDOUT: 
	I0520 04:49:48.562854   20685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:49:48.562867   20685 client.go:171] duration metric: took 232.03525ms to LocalClient.Create
	I0520 04:49:50.565024   20685 start.go:128] duration metric: took 2.292652875s to createHost
	I0520 04:49:50.565186   20685 start.go:83] releasing machines lock for "multinode-964000", held for 2.293218125s
	W0520 04:49:50.565539   20685 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-964000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-964000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:49:50.574279   20685 out.go:177] 
	W0520 04:49:50.579178   20685 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:49:50.579246   20685 out.go:239] * 
	* 
	W0520 04:49:50.582165   20685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:49:50.590056   20685 out.go:177] 

** /stderr **
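Note: the verbose trace above shows the shape of the create flow: build the disk image with qemu-img, launch qemu-system-aarch64 through socket_vmnet_client, and on failure delete the half-created host, wait five seconds, and retry exactly once before exiting with GUEST_PROVISION (exit status 80). A sketch of that retry shape, as read off the log (not minikube's actual code):

	// Assumed shape of the StartHost retry visible in the log above, not
	// minikube's actual implementation: one fixed 5s backoff, one retry.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create, which always fails
	// on this host while socket_vmnet is down.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
			}
		}
	}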
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-964000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (67.289167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)

TestMultiNode/serial/DeployApp2Nodes (93.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (58.717417ms)

** stderr ** 
	error: cluster "multinode-964000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- rollout status deployment/busybox: exit status 1 (55.55075ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.256833ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.823709ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.385541ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.0195ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.743583ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.807667ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.8455ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.247084ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.066875ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.819041ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.466666ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
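Note: the identical attempts above are a bounded poll: the same kubectl query is retried until pod IPs appear or the time budget runs out, which is where most of this test's 93s goes. A sketch of the pattern (assumed shape; runKubectl and the fixed 10s interval are hypothetical stand-ins):

	// Assumed shape of the poll above; runKubectl stands in for the
	// test's exec of "kubectl get pods -o jsonpath={.items[*].status.podIP}".
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func runKubectl() (string, error) {
		return "", errors.New(`no server found for cluster "multinode-964000"`)
	}

	func main() {
		deadline := time.Now().Add(90 * time.Second)
		for time.Now().Before(deadline) {
			out, err := runKubectl()
			if err == nil && out != "" {
				fmt.Println("pod IPs:", out)
				return
			}
			fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
			time.Sleep(10 * time.Second)
		}
		fmt.Println("failed to resolve pod IPs: deadline exceeded")
	}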
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.263792ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.996666ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.986125ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.658417ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.684875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.09s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-964000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.397291ms)

** stderr ** 
	error: no server found for cluster "multinode-964000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.5495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-964000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-964000 -v 3 --alsologtostderr: exit status 83 (37.900334ms)

-- stdout --
	* The control-plane node multinode-964000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-964000"

-- /stdout --
** stderr ** 
	I0520 04:51:23.891452   20781 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:23.891607   20781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:23.891610   20781 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:23.891612   20781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:23.891743   20781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:23.891973   20781 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:23.892160   20781 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:23.896731   20781 out.go:177] * The control-plane node multinode-964000 host is not running: state=Stopped
	I0520 04:51:23.899659   20781 out.go:177]   To start a cluster, run: "minikube start -p multinode-964000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-964000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.140875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-964000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-964000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.346292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-964000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-964000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-964000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
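The "unexpected end of JSON input" at multinode_test.go:230 follows directly from the kubectl failure above: the context lookup failed, so the command wrote nothing to stdout, and encoding/json returns exactly that error when handed an empty input. A minimal reproduction (the variable names are illustrative, not the test's):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl exited non-zero, so the captured stdout is empty.
		out := []byte("")

		var labels []map[string]string
		err := json.Unmarshal(out, &labels)
		fmt.Println(err) // unexpected end of JSON input
	}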
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.3025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-964000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-964000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-964000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"multinode-964000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.410875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status --output json --alsologtostderr: exit status 7 (29.19525ms)

-- stdout --
	{"Name":"multinode-964000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0520 04:51:24.115753   20794 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:24.115919   20794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.115922   20794 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:24.115924   20794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.116046   20794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:24.116169   20794 out.go:298] Setting JSON to true
	I0520 04:51:24.116177   20794 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:24.116233   20794 notify.go:220] Checking for updates...
	I0520 04:51:24.116366   20794 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:24.116377   20794 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:24.116585   20794 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:24.116589   20794 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:24.116591   20794 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-964000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
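The unmarshal error at multinode_test.go:191 is a shape mismatch: with a single node, "minikube status --output json" emits one JSON object (see the stdout above), while the test decodes into a []cmd.Status slice and therefore expects an array. A minimal reproduction, with a stand-in struct in place of the real cmd.Status:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Stand-in for cmd.Status; the real type carries more fields.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		// One node produces a lone object rather than an array of them.
		out := []byte(`{"Name":"multinode-964000","Host":"Stopped"}`)

		var statuses []Status
		err := json.Unmarshal(out, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}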
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.375375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 node stop m03: exit status 85 (47.932125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-964000 node stop m03": exit status 85
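The helpers distinguish these failures purely by process exit code: 85 here, 83 for the "host is not running" redirect earlier, 7 from the status probes. A sketch of how such a code can be recovered from a finished command in Go (illustrative only, not the suite's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-964000", "node", "stop", "m03")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// ExitCode surfaces the 85 that the assertion above reports.
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}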
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status: exit status 7 (28.9215ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr: exit status 7 (29.306167ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:24.252123   20802 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:24.252296   20802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.252299   20802 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:24.252301   20802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.252438   20802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:24.252558   20802 out.go:298] Setting JSON to false
	I0520 04:51:24.252567   20802 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:24.252625   20802 notify.go:220] Checking for updates...
	I0520 04:51:24.252765   20802 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:24.252772   20802 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:24.252974   20802 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:24.252978   20802 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:24.252981   20802 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr": multinode-964000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (28.982291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.284333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0520 04:51:24.310285   20806 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:24.310689   20806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.310693   20806 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:24.310696   20806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.310853   20806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:24.311069   20806 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:24.311256   20806 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:24.315651   20806 out.go:177] 
	W0520 04:51:24.318643   20806 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0520 04:51:24.318649   20806 out.go:239] * 
	* 
	W0520 04:51:24.321051   20806 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:51:24.324574   20806 out.go:177] 

** /stderr **
multinode_test.go:284: I0520 04:51:24.310285   20806 out.go:291] Setting OutFile to fd 1 ...
I0520 04:51:24.310689   20806 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:51:24.310693   20806 out.go:304] Setting ErrFile to fd 2...
I0520 04:51:24.310696   20806 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:51:24.310853   20806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
I0520 04:51:24.311069   20806 mustload.go:65] Loading cluster: multinode-964000
I0520 04:51:24.311256   20806 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:51:24.315651   20806 out.go:177] 
W0520 04:51:24.318643   20806 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0520 04:51:24.318649   20806 out.go:239] * 
* 
W0520 04:51:24.321051   20806 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:51:24.324574   20806 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-964000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (29.029584ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:24.356010   20808 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:24.356169   20808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.356172   20808 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:24.356174   20808 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:24.356310   20808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:24.356432   20808 out.go:298] Setting JSON to false
	I0520 04:51:24.356441   20808 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:24.356492   20808 notify.go:220] Checking for updates...
	I0520 04:51:24.356631   20808 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:24.356638   20808 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:24.356818   20808 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:24.356822   20808 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:24.356824   20808 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (75.490042ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:25.339076   20810 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:25.339297   20810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:25.339301   20810 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:25.339304   20810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:25.339498   20810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:25.339655   20810 out.go:298] Setting JSON to false
	I0520 04:51:25.339669   20810 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:25.339711   20810 notify.go:220] Checking for updates...
	I0520 04:51:25.339985   20810 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:25.339996   20810 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:25.340299   20810 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:25.340304   20810 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:25.340307   20810 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (73.042667ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:27.003585   20812 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:27.003813   20812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:27.003817   20812 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:27.003820   20812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:27.003982   20812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:27.004151   20812 out.go:298] Setting JSON to false
	I0520 04:51:27.004163   20812 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:27.004192   20812 notify.go:220] Checking for updates...
	I0520 04:51:27.004421   20812 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:27.004430   20812 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:27.004694   20812 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:27.004699   20812 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:27.004703   20812 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (74.767708ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:29.728736   20814 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:29.728978   20814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:29.728982   20814 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:29.728985   20814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:29.729145   20814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:29.729322   20814 out.go:298] Setting JSON to false
	I0520 04:51:29.729334   20814 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:29.729375   20814 notify.go:220] Checking for updates...
	I0520 04:51:29.729591   20814 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:29.729601   20814 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:29.729897   20814 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:29.729902   20814 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:29.729905   20814 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (74.914334ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:33.453766   20816 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:33.453945   20816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:33.453950   20816 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:33.453953   20816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:33.454108   20816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:33.454269   20816 out.go:298] Setting JSON to false
	I0520 04:51:33.454281   20816 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:33.454314   20816 notify.go:220] Checking for updates...
	I0520 04:51:33.454551   20816 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:33.454563   20816 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:33.454835   20816 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:33.454840   20816 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:33.454843   20816 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (75.124375ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:36.168890   20818 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:36.169115   20818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:36.169119   20818 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:36.169122   20818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:36.169267   20818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:36.169435   20818 out.go:298] Setting JSON to false
	I0520 04:51:36.169446   20818 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:36.169478   20818 notify.go:220] Checking for updates...
	I0520 04:51:36.169682   20818 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:36.169690   20818 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:36.170018   20818 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:36.170022   20818 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:36.170025   20818 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (76.195417ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:40.618055   20822 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:40.618323   20822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:40.618328   20822 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:40.618331   20822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:40.618530   20822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:40.618732   20822 out.go:298] Setting JSON to false
	I0520 04:51:40.618746   20822 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:40.618790   20822 notify.go:220] Checking for updates...
	I0520 04:51:40.619024   20822 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:40.619033   20822 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:40.619327   20822 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:40.619332   20822 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:40.619335   20822 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (72.722083ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:51:56.443337   20827 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:51:56.443538   20827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:56.443542   20827 out.go:304] Setting ErrFile to fd 2...
	I0520 04:51:56.443545   20827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:51:56.443705   20827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:51:56.443871   20827 out.go:298] Setting JSON to false
	I0520 04:51:56.443887   20827 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:51:56.443915   20827 notify.go:220] Checking for updates...
	I0520 04:51:56.444145   20827 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:51:56.444160   20827 status.go:255] checking status of multinode-964000 ...
	I0520 04:51:56.444428   20827 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:51:56.444433   20827 status.go:343] host is not running, skipping remaining checks
	I0520 04:51:56.444436   20827 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (72.680542ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:52:07.664425   20834 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:52:07.664657   20834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:07.664662   20834 out.go:304] Setting ErrFile to fd 2...
	I0520 04:52:07.664665   20834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:07.664868   20834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:52:07.665023   20834 out.go:298] Setting JSON to false
	I0520 04:52:07.665034   20834 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:52:07.665075   20834 notify.go:220] Checking for updates...
	I0520 04:52:07.665282   20834 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:52:07.665290   20834 status.go:255] checking status of multinode-964000 ...
	I0520 04:52:07.665579   20834 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:52:07.665584   20834 status.go:343] host is not running, skipping remaining checks
	I0520 04:52:07.665587   20834 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr: exit status 7 (73.114ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:52:22.912866   20838 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:52:22.913099   20838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:22.913104   20838 out.go:304] Setting ErrFile to fd 2...
	I0520 04:52:22.913106   20838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:22.913286   20838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:52:22.913444   20838 out.go:298] Setting JSON to false
	I0520 04:52:22.913455   20838 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:52:22.913497   20838 notify.go:220] Checking for updates...
	I0520 04:52:22.913706   20838 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:52:22.913714   20838 status.go:255] checking status of multinode-964000 ...
	I0520 04:52:22.914001   20838 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:52:22.914006   20838 status.go:343] host is not running, skipping remaining checks
	I0520 04:52:22.914008   20838 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-964000 status -v=7 --alsologtostderr" : exit status 7
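The timestamps of the repeated status runs above (04:51:24, :25, :27, :29, :33, :36, :40, :56, 04:52:07, :22) show the test polling with a growing delay for roughly a minute before giving up, which accounts for nearly all of this subtest's 58.66 seconds. A sketch of that polling pattern under those assumptions (the suite's real retry helper may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus reruns the status command with a growing delay until it
	// succeeds or the deadline passes.
	func pollStatus(limit time.Duration) error {
		deadline := time.Now().Add(limit)
		delay := time.Second
		for {
			err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-964000", "status").Run()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("status never succeeded: %w", err)
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between attempts
		}
	}

	func main() {
		fmt.Println(pollStatus(time.Minute))
	}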
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (32.2095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (58.66s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-964000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-964000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-964000: (3.361393708s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-964000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-964000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222862459s)

-- stdout --
	* [multinode-964000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-964000" primary control-plane node in "multinode-964000" cluster
	* Restarting existing qemu2 VM for "multinode-964000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-964000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:52:26.404080   20862 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:52:26.404245   20862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:26.404249   20862 out.go:304] Setting ErrFile to fd 2...
	I0520 04:52:26.404252   20862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:26.404433   20862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:52:26.405657   20862 out.go:298] Setting JSON to false
	I0520 04:52:26.424963   20862 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10317,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:52:26.425040   20862 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:52:26.429676   20862 out.go:177] * [multinode-964000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:52:26.436614   20862 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:52:26.436674   20862 notify.go:220] Checking for updates...
	I0520 04:52:26.441887   20862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:52:26.444598   20862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:52:26.447640   20862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:52:26.450589   20862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:52:26.453632   20862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:52:26.456944   20862 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:52:26.457008   20862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:52:26.461671   20862 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:52:26.468591   20862 start.go:297] selected driver: qemu2
	I0520 04:52:26.468601   20862 start.go:901] validating driver "qemu2" against &{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:52:26.468676   20862 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:52:26.471153   20862 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:52:26.471195   20862 cni.go:84] Creating CNI manager for ""
	I0520 04:52:26.471199   20862 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:52:26.471251   20862 start.go:340] cluster config:
	{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:52:26.475887   20862 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:52:26.482583   20862 out.go:177] * Starting "multinode-964000" primary control-plane node in "multinode-964000" cluster
	I0520 04:52:26.486637   20862 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:52:26.486653   20862 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:52:26.486664   20862 cache.go:56] Caching tarball of preloaded images
	I0520 04:52:26.486727   20862 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:52:26.486733   20862 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:52:26.486791   20862 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/multinode-964000/config.json ...
	I0520 04:52:26.487219   20862 start.go:360] acquireMachinesLock for multinode-964000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:52:26.487262   20862 start.go:364] duration metric: took 35µs to acquireMachinesLock for "multinode-964000"
	I0520 04:52:26.487274   20862 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:52:26.487280   20862 fix.go:54] fixHost starting: 
	I0520 04:52:26.487414   20862 fix.go:112] recreateIfNeeded on multinode-964000: state=Stopped err=<nil>
	W0520 04:52:26.487426   20862 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:52:26.495584   20862 out.go:177] * Restarting existing qemu2 VM for "multinode-964000" ...
	I0520 04:52:26.501401   20862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:97:33:94:ca:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:52:26.503671   20862 main.go:141] libmachine: STDOUT: 
	I0520 04:52:26.503693   20862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:52:26.503735   20862 fix.go:56] duration metric: took 16.443917ms for fixHost
	I0520 04:52:26.503739   20862 start.go:83] releasing machines lock for "multinode-964000", held for 16.472334ms
	W0520 04:52:26.503747   20862 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:52:26.503786   20862 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:52:26.503791   20862 start.go:728] Will try again in 5 seconds ...
	I0520 04:52:31.504661   20862 start.go:360] acquireMachinesLock for multinode-964000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:52:31.505038   20862 start.go:364] duration metric: took 288.792µs to acquireMachinesLock for "multinode-964000"
	I0520 04:52:31.505205   20862 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:52:31.505224   20862 fix.go:54] fixHost starting: 
	I0520 04:52:31.505882   20862 fix.go:112] recreateIfNeeded on multinode-964000: state=Stopped err=<nil>
	W0520 04:52:31.505911   20862 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:52:31.514307   20862 out.go:177] * Restarting existing qemu2 VM for "multinode-964000" ...
	I0520 04:52:31.518486   20862 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:97:33:94:ca:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:52:31.527362   20862 main.go:141] libmachine: STDOUT: 
	I0520 04:52:31.527418   20862 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:52:31.527476   20862 fix.go:56] duration metric: took 22.2535ms for fixHost
	I0520 04:52:31.527493   20862 start.go:83] releasing machines lock for "multinode-964000", held for 22.431792ms
	W0520 04:52:31.527669   20862 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-964000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-964000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:52:31.534296   20862 out.go:177] 
	W0520 04:52:31.538348   20862 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:52:31.538372   20862 out.go:239] * 
	* 
	W0520 04:52:31.541129   20862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:52:31.548335   20862 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-964000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-964000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (32.444583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.72s)
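
Both restart attempts above die at the same precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand a network file descriptor to qemu-system-aarch64 and the driver surfaces "Connection refused" before the VM ever boots. A minimal diagnostic sketch in Go (not part of the test suite; the socket path is the SocketVMnetPath value from the cluster config logged above) that reproduces the driver's precondition:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client connects to above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // Same symptom as the STDERR lines above: the host-side
            // socket_vmnet daemon is not running or not reachable.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

A failed dial here points at the host-side daemon, which is why the suggested "minikube delete -p multinode-964000" is unlikely to help in this run.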

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 node delete m03: exit status 83 (40.752959ms)

-- stdout --
	* The control-plane node multinode-964000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-964000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-964000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr: exit status 7 (29.537583ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:52:31.735520   20876 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:52:31.735658   20876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:31.735661   20876 out.go:304] Setting ErrFile to fd 2...
	I0520 04:52:31.735664   20876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:31.735798   20876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:52:31.735922   20876 out.go:298] Setting JSON to false
	I0520 04:52:31.735931   20876 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:52:31.736001   20876 notify.go:220] Checking for updates...
	I0520 04:52:31.736123   20876 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:52:31.736130   20876 status.go:255] checking status of multinode-964000 ...
	I0520 04:52:31.736330   20876 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:52:31.736335   20876 status.go:343] host is not running, skipping remaining checks
	I0520 04:52:31.736337   20876 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.092167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
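
The `exit status 83` above is minikube declining the node delete because the profile's host is stopped; the test sees only the advisory stdout plus the process exit code. A sketch of how such an exit status is recovered in Go (illustrative, not the actual helper code in multinode_test.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the same command the test runs and recover its exit status.
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-964000", "node", "delete", "m03")
        out, err := cmd.CombinedOutput()
        if exitErr, ok := err.(*exec.ExitError); ok {
            fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
        }
    }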

TestMultiNode/serial/StopMultiNode (3.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-964000 stop: (3.147084417s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status: exit status 7 (66.698917ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr: exit status 7 (32.001542ms)

-- stdout --
	multinode-964000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 04:52:35.011036   20900 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:52:35.011180   20900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:35.011183   20900 out.go:304] Setting ErrFile to fd 2...
	I0520 04:52:35.011185   20900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:35.011310   20900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:52:35.011424   20900 out.go:298] Setting JSON to false
	I0520 04:52:35.011433   20900 mustload.go:65] Loading cluster: multinode-964000
	I0520 04:52:35.011488   20900 notify.go:220] Checking for updates...
	I0520 04:52:35.011655   20900 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:52:35.011661   20900 status.go:255] checking status of multinode-964000 ...
	I0520 04:52:35.011862   20900 status.go:330] multinode-964000 host status = "Stopped" (err=<nil>)
	I0520 04:52:35.011866   20900 status.go:343] host is not running, skipping remaining checks
	I0520 04:52:35.011868   20900 status.go:257] multinode-964000 status: &{Name:multinode-964000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr": multinode-964000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-964000 status --alsologtostderr": multinode-964000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (29.372916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.28s)
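
Both "incorrect number" failures above come from counting stanzas in the status output: after a stop, a multi-node profile should report one "host: Stopped" (and one "kubelet: Stopped") per node, but only the control-plane stanza is present here because the worker node was never created earlier in the run. A minimal sketch of that style of assertion (assumed shape, not copied from multinode_test.go):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status output as captured above: only the control-plane stanza.
        status := "multinode-964000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        wantNodes := 2 // control plane + the worker that was never added
        if got := strings.Count(status, "host: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
        }
    }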

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-964000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-964000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178100916s)

-- stdout --
	* [multinode-964000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-964000" primary control-plane node in "multinode-964000" cluster
	* Restarting existing qemu2 VM for "multinode-964000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-964000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:52:35.069194   20904 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:52:35.069323   20904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:35.069326   20904 out.go:304] Setting ErrFile to fd 2...
	I0520 04:52:35.069328   20904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:52:35.069459   20904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:52:35.070443   20904 out.go:298] Setting JSON to false
	I0520 04:52:35.086476   20904 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10326,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:52:35.086542   20904 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:52:35.091570   20904 out.go:177] * [multinode-964000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:52:35.099473   20904 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:52:35.102498   20904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:52:35.099536   20904 notify.go:220] Checking for updates...
	I0520 04:52:35.106404   20904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:52:35.109495   20904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:52:35.112503   20904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:52:35.115479   20904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:52:35.118782   20904 config.go:182] Loaded profile config "multinode-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:52:35.119035   20904 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:52:35.123504   20904 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:52:35.130501   20904 start.go:297] selected driver: qemu2
	I0520 04:52:35.130507   20904 start.go:901] validating driver "qemu2" against &{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:52:35.130578   20904 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:52:35.132745   20904 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:52:35.132765   20904 cni.go:84] Creating CNI manager for ""
	I0520 04:52:35.132770   20904 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:52:35.132815   20904 start.go:340] cluster config:
	{Name:multinode-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:52:35.137093   20904 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:52:35.144443   20904 out.go:177] * Starting "multinode-964000" primary control-plane node in "multinode-964000" cluster
	I0520 04:52:35.148453   20904 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:52:35.148472   20904 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:52:35.148484   20904 cache.go:56] Caching tarball of preloaded images
	I0520 04:52:35.148552   20904 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:52:35.148558   20904 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:52:35.148615   20904 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/multinode-964000/config.json ...
	I0520 04:52:35.148984   20904 start.go:360] acquireMachinesLock for multinode-964000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:52:35.149017   20904 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "multinode-964000"
	I0520 04:52:35.149028   20904 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:52:35.149036   20904 fix.go:54] fixHost starting: 
	I0520 04:52:35.149144   20904 fix.go:112] recreateIfNeeded on multinode-964000: state=Stopped err=<nil>
	W0520 04:52:35.149151   20904 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:52:35.157425   20904 out.go:177] * Restarting existing qemu2 VM for "multinode-964000" ...
	I0520 04:52:35.160518   20904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:97:33:94:ca:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:52:35.162559   20904 main.go:141] libmachine: STDOUT: 
	I0520 04:52:35.162574   20904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:52:35.162601   20904 fix.go:56] duration metric: took 13.566916ms for fixHost
	I0520 04:52:35.162606   20904 start.go:83] releasing machines lock for "multinode-964000", held for 13.58375ms
	W0520 04:52:35.162613   20904 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:52:35.162651   20904 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:52:35.162656   20904 start.go:728] Will try again in 5 seconds ...
	I0520 04:52:40.164821   20904 start.go:360] acquireMachinesLock for multinode-964000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:52:40.165259   20904 start.go:364] duration metric: took 326.959µs to acquireMachinesLock for "multinode-964000"
	I0520 04:52:40.165384   20904 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:52:40.165404   20904 fix.go:54] fixHost starting: 
	I0520 04:52:40.166070   20904 fix.go:112] recreateIfNeeded on multinode-964000: state=Stopped err=<nil>
	W0520 04:52:40.166096   20904 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:52:40.174485   20904 out.go:177] * Restarting existing qemu2 VM for "multinode-964000" ...
	I0520 04:52:40.178477   20904 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:97:33:94:ca:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/multinode-964000/disk.qcow2
	I0520 04:52:40.187374   20904 main.go:141] libmachine: STDOUT: 
	I0520 04:52:40.187438   20904 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:52:40.187533   20904 fix.go:56] duration metric: took 22.094833ms for fixHost
	I0520 04:52:40.187554   20904 start.go:83] releasing machines lock for "multinode-964000", held for 22.274791ms
	W0520 04:52:40.187695   20904 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-964000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-964000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:52:40.194483   20904 out.go:177] 
	W0520 04:52:40.198478   20904 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:52:40.198503   20904 out.go:239] * 
	* 
	W0520 04:52:40.201237   20904 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:52:40.208416   20904 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-964000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (72.659792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
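
The trace above also shows the start path's fixed retry policy: fixHost fails at 04:52:35, start.go logs "Will try again in 5 seconds", and the identical failure at 04:52:40 is then surfaced as GUEST_PROVISION (exit status 80). A condensed sketch of that control flow (startHost is an illustrative stand-in, not minikube's actual API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver start that fails twice above.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // the pause visible between the two attempts
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }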

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-964000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-964000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-964000-m01 --driver=qemu2 : exit status 80 (9.8760365s)

-- stdout --
	* [multinode-964000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-964000-m01" primary control-plane node in "multinode-964000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-964000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-964000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-964000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-964000-m02 --driver=qemu2 : exit status 80 (10.015083416s)

-- stdout --
	* [multinode-964000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-964000-m02" primary control-plane node in "multinode-964000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-964000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-964000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-964000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-964000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-964000: exit status 83 (79.59775ms)

-- stdout --
	* The control-plane node multinode-964000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-964000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-964000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-964000 -n multinode-964000: exit status 7 (30.043875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)
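
ValidateNameConflict exercises profile naming: additional nodes of a profile are given names of the form <profile>-mNN, so standalone profiles called multinode-964000-m01 and multinode-964000-m02 can shadow node names of the existing multinode-964000 cluster. A sketch of the collision rule the test relies on (the regexp is illustrative, not minikube's exact validation):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        existing := "multinode-964000"
        // New profile names matching <existing>-m<NN> collide with the
        // generated node names of the existing multi-node cluster.
        conflict := regexp.MustCompile("^" + regexp.QuoteMeta(existing) + `-m\d+$`)
        for _, candidate := range []string{"multinode-964000-m01", "multinode-964000-m02"} {
            fmt.Println(candidate, "conflicts:", conflict.MatchString(candidate))
        }
    }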

TestPreload (9.93s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-550000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-550000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.765742292s)

-- stdout --
	* [test-preload-550000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-550000" primary control-plane node in "test-preload-550000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-550000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:53:00.584824   20964 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:53:00.584958   20964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:00.584961   20964 out.go:304] Setting ErrFile to fd 2...
	I0520 04:53:00.584963   20964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:53:00.585116   20964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:53:00.586209   20964 out.go:298] Setting JSON to false
	I0520 04:53:00.602478   20964 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10351,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:53:00.602538   20964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:53:00.608635   20964 out.go:177] * [test-preload-550000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:53:00.615592   20964 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:53:00.620596   20964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:53:00.615660   20964 notify.go:220] Checking for updates...
	I0520 04:53:00.623592   20964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:53:00.626532   20964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:53:00.629606   20964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:53:00.632579   20964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:53:00.635973   20964 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:53:00.636029   20964 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:53:00.640529   20964 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:53:00.647517   20964 start.go:297] selected driver: qemu2
	I0520 04:53:00.647524   20964 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:53:00.647530   20964 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:53:00.649788   20964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:53:00.653603   20964 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:53:00.656609   20964 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:53:00.656623   20964 cni.go:84] Creating CNI manager for ""
	I0520 04:53:00.656631   20964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:53:00.656634   20964 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:53:00.656658   20964 start.go:340] cluster config:
	{Name:test-preload-550000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-550000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:53:00.660894   20964 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.666562   20964 out.go:177] * Starting "test-preload-550000" primary control-plane node in "test-preload-550000" cluster
	I0520 04:53:00.670440   20964 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0520 04:53:00.670535   20964 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/test-preload-550000/config.json ...
	I0520 04:53:00.670558   20964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/test-preload-550000/config.json: {Name:mk9f9d7d80a8bb4a37ed4d6587ef5793ad947e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:53:00.670546   20964 cache.go:107] acquiring lock: {Name:mk95541300b9ab09f76a4eea8dd4c3806294ac6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670564   20964 cache.go:107] acquiring lock: {Name:mka1fd6f06df0b1939f7f40b8aab6a3ac80af40f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670575   20964 cache.go:107] acquiring lock: {Name:mk57b64a398954da4502c956778f36794f1ababa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670713   20964 cache.go:107] acquiring lock: {Name:mk4b450ec12fb44f66ac662457c19639ee185f19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670787   20964 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:53:00.670851   20964 cache.go:107] acquiring lock: {Name:mk63f4fd40e9ef670add8d3d43dc22f61a1074f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670871   20964 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 04:53:00.670880   20964 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:53:00.670879   20964 start.go:360] acquireMachinesLock for test-preload-550000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:00.670880   20964 cache.go:107] acquiring lock: {Name:mk1dc0219b4313a7f5f06f3abbf31c0bccfa56f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670913   20964 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "test-preload-550000"
	I0520 04:53:00.670840   20964 cache.go:107] acquiring lock: {Name:mk7d782f3f511bff44d1d223293e6bf0dac260b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670952   20964 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 04:53:00.670956   20964 cache.go:107] acquiring lock: {Name:mk7f4a97b0f9e9d98006899e487ffce2d004a678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:53:00.670926   20964 start.go:93] Provisioning new machine with config: &{Name:test-preload-550000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-550000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:00.670969   20964 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:00.675606   20964 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:53:00.671028   20964 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 04:53:00.671044   20964 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 04:53:00.671101   20964 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:53:00.671534   20964 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:53:00.680109   20964 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 04:53:00.680115   20964 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:53:00.684861   20964 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 04:53:00.685103   20964 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 04:53:00.685201   20964 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:53:00.685254   20964 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:53:00.685349   20964 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 04:53:00.688388   20964 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:53:00.691837   20964 start.go:159] libmachine.API.Create for "test-preload-550000" (driver="qemu2")
	I0520 04:53:00.691864   20964 client.go:168] LocalClient.Create starting
	I0520 04:53:00.691942   20964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:00.691976   20964 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:00.691990   20964 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:00.692041   20964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:00.692064   20964 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:00.692073   20964 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:00.692508   20964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:00.833862   20964 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:00.937447   20964 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:00.937462   20964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:00.937664   20964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2
	I0520 04:53:00.950687   20964 main.go:141] libmachine: STDOUT: 
	I0520 04:53:00.950707   20964 main.go:141] libmachine: STDERR: 
	I0520 04:53:00.950793   20964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2 +20000M
	I0520 04:53:00.963092   20964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:00.963129   20964 main.go:141] libmachine: STDERR: 
	I0520 04:53:00.963146   20964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2
	I0520 04:53:00.963152   20964 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:00.963201   20964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d6:0a:82:28:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2
	I0520 04:53:00.965199   20964 main.go:141] libmachine: STDOUT: 
	I0520 04:53:00.965225   20964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:00.965244   20964 client.go:171] duration metric: took 273.376583ms to LocalClient.Create
	I0520 04:53:01.027351   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0520 04:53:01.066642   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0520 04:53:01.075797   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:53:01.076543   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0520 04:53:01.128807   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:53:01.181811   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0520 04:53:01.204372   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0520 04:53:01.204392   20964 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 533.697667ms
	I0520 04:53:01.204416   20964 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0520 04:53:01.214613   20964 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:53:01.214683   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0520 04:53:01.408317   20964 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:53:01.408404   20964 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:53:01.656584   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 04:53:01.656646   20964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 986.104625ms
	I0520 04:53:01.656670   20964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 04:53:02.656734   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0520 04:53:02.656779   20964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.9859925s
	I0520 04:53:02.656804   20964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0520 04:53:02.814720   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0520 04:53:02.814767   20964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.143950125s
	I0520 04:53:02.814824   20964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0520 04:53:02.965447   20964 start.go:128] duration metric: took 2.294473708s to createHost
	I0520 04:53:02.965500   20964 start.go:83] releasing machines lock for "test-preload-550000", held for 2.294591041s
	W0520 04:53:02.965543   20964 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:02.974511   20964 out.go:177] * Deleting "test-preload-550000" in qemu2 ...
	W0520 04:53:02.998005   20964 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:02.998033   20964 start.go:728] Will try again in 5 seconds ...
	I0520 04:53:05.499081   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0520 04:53:05.499143   20964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.828213666s
	I0520 04:53:05.499170   20964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0520 04:53:05.815899   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0520 04:53:05.815951   20964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.145430875s
	I0520 04:53:05.815982   20964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0520 04:53:06.048271   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0520 04:53:06.048316   20964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.377797167s
	I0520 04:53:06.048342   20964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0520 04:53:07.998274   20964 start.go:360] acquireMachinesLock for test-preload-550000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:53:07.998689   20964 start.go:364] duration metric: took 345.792µs to acquireMachinesLock for "test-preload-550000"
	I0520 04:53:07.998787   20964 start.go:93] Provisioning new machine with config: &{Name:test-preload-550000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-550000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:53:07.999024   20964 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:53:08.010753   20964 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:53:08.061095   20964 start.go:159] libmachine.API.Create for "test-preload-550000" (driver="qemu2")
	I0520 04:53:08.061130   20964 client.go:168] LocalClient.Create starting
	I0520 04:53:08.061239   20964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:53:08.061309   20964 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:08.061339   20964 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:08.061398   20964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:53:08.061442   20964 main.go:141] libmachine: Decoding PEM data...
	I0520 04:53:08.061456   20964 main.go:141] libmachine: Parsing certificate...
	I0520 04:53:08.061945   20964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:53:08.204961   20964 main.go:141] libmachine: Creating SSH key...
	I0520 04:53:08.255645   20964 main.go:141] libmachine: Creating Disk image...
	I0520 04:53:08.255655   20964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:53:08.255825   20964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2
	I0520 04:53:08.268344   20964 main.go:141] libmachine: STDOUT: 
	I0520 04:53:08.268374   20964 main.go:141] libmachine: STDERR: 
	I0520 04:53:08.268435   20964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2 +20000M
	I0520 04:53:08.279651   20964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:53:08.279673   20964 main.go:141] libmachine: STDERR: 
	I0520 04:53:08.279683   20964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2
	I0520 04:53:08.279699   20964 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:53:08.279736   20964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d9:d4:74:c6:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/test-preload-550000/disk.qcow2
	I0520 04:53:08.281562   20964 main.go:141] libmachine: STDOUT: 
	I0520 04:53:08.281581   20964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:53:08.281600   20964 client.go:171] duration metric: took 220.452333ms to LocalClient.Create
	I0520 04:53:09.693701   20964 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0520 04:53:09.693776   20964 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.023043625s
	I0520 04:53:09.693804   20964 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0520 04:53:09.693887   20964 cache.go:87] Successfully saved all images to host disk.
	I0520 04:53:10.283814   20964 start.go:128] duration metric: took 2.28477s to createHost
	I0520 04:53:10.283897   20964 start.go:83] releasing machines lock for "test-preload-550000", held for 2.285175458s
	W0520 04:53:10.284214   20964 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-550000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-550000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:53:10.293630   20964 out.go:177] 
	W0520 04:53:10.299752   20964 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:53:10.299778   20964 out.go:239] * 
	* 
	W0520 04:53:10.302310   20964 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:53:10.308577   20964 out.go:177] 

** /stderr **
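
Note: the cache.go lines in the stderr above show how minikube lays out its image cache on the host: each image reference is flattened into a file path under cache/images/<arch>/, with the ':' tag separator rewritten to '_' (e.g. registry.k8s.io/pause:3.7 -> registry.k8s.io/pause_3.7). A minimal Go sketch of that mapping, for orientation only; the helper name and signature here are assumptions, not minikube's actual code:

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath is a hypothetical helper reconstructing the path convention
	// visible in the log; it is not minikube's real implementation.
	func cachePath(minikubeHome, arch, ref string) string {
		// The log shows the tag separator ':' flattened to '_'.
		return filepath.Join(minikubeHome, "cache", "images", arch,
			strings.ReplaceAll(ref, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/Users/jenkins/minikube-integration/18929-19024/.minikube",
			"arm64", "registry.k8s.io/pause:3.7"))
		// .../cache/images/arm64/registry.k8s.io/pause_3.7, matching the log
	}
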
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-550000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-20 04:53:10.326928 -0700 PDT m=+671.233791459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-550000 -n test-preload-550000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-550000 -n test-preload-550000: exit status 7 (65.161833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-550000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-550000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-550000
--- FAIL: TestPreload (9.93s)
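
Note: every VM creation in this test (and in the failures that follow) dies on the same STDERR line: dialing the unix socket /var/run/socket_vmnet is refused, meaning the socket path exists but nothing is accepting connections on it, i.e. the socket_vmnet daemon is not running on the build agent. The condition can be reproduced independently of minikube with a short probe like the following (a diagnostic sketch, not part of the suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" here matches the report: the socket file may
		// be present, but no socket_vmnet daemon is listening behind it.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
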

TestScheduledStopUnix (9.88s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-155000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-155000 --memory=2048 --driver=qemu2 : exit status 80 (9.709619333s)

-- stdout --
	* [scheduled-stop-155000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-155000" primary control-plane node in "scheduled-stop-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-155000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-155000" primary control-plane node in "scheduled-stop-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-20 04:53:20.202418 -0700 PDT m=+681.109349001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-155000 -n scheduled-stop-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-155000 -n scheduled-stop-155000: exit status 7 (68.048375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-155000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-155000
--- FAIL: TestScheduledStopUnix (9.88s)
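
Note: the start path visible in these logs retries host creation exactly once: on the first failure it prints "StartHost failed, but will try again", deletes the half-created profile, waits 5 seconds, and retries before exiting with GUEST_PROVISION. A condensed Go sketch of that control flow; createHost and deleteHost are stand-ins for the driver calls, not minikube's API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the qemu2 driver call that fails in this run.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// deleteHost stands in for the cleanup seen as: * Deleting "..." in qemu2 ...
	func deleteHost(profile string) {}

	// startWithRetry mirrors the two-attempt pattern in the log: fail, delete,
	// wait, retry once, then surface the error to the caller.
	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost(profile)
		time.Sleep(5 * time.Second)
		return createHost(profile)
	}

	func main() {
		if err := startWithRetry("scheduled-stop-155000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
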

TestSkaffold (11.97s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2193386858 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-643000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-643000 --memory=2600 --driver=qemu2 : exit status 80 (9.722504375s)

-- stdout --
	* [skaffold-643000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-643000" primary control-plane node in "skaffold-643000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-643000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-643000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-643000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-643000" primary control-plane node in "skaffold-643000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-643000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-643000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-20 04:53:32.179006 -0700 PDT m=+693.086021292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-643000 -n skaffold-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-643000 -n skaffold-643000: exit status 7 (62.939166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-643000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-643000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-643000
--- FAIL: TestSkaffold (11.97s)

TestRunningBinaryUpgrade (587.81s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1735361814 start -p running-upgrade-158000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1735361814 start -p running-upgrade-158000 --memory=2200 --vm-driver=qemu2 : (50.780340708s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-158000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-158000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.632861s)

-- stdout --
	* [running-upgrade-158000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-158000" primary control-plane node in "running-upgrade-158000" cluster
	* Updating the running qemu2 "running-upgrade-158000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0520 04:55:04.596020   21370 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:55:04.596220   21370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:55:04.596223   21370 out.go:304] Setting ErrFile to fd 2...
	I0520 04:55:04.596226   21370 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:55:04.596359   21370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:55:04.597331   21370 out.go:298] Setting JSON to false
	I0520 04:55:04.614795   21370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10475,"bootTime":1716195629,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:55:04.614868   21370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:55:04.620296   21370 out.go:177] * [running-upgrade-158000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:55:04.627300   21370 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:55:04.627367   21370 notify.go:220] Checking for updates...
	I0520 04:55:04.633215   21370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:55:04.636260   21370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:55:04.637621   21370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:55:04.640250   21370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:55:04.643284   21370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:55:04.646553   21370 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:55:04.650205   21370 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:55:04.653234   21370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:55:04.657241   21370 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:55:04.664255   21370 start.go:297] selected driver: qemu2
	I0520 04:55:04.664260   21370 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53952 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:55:04.664309   21370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:55:04.666798   21370 cni.go:84] Creating CNI manager for ""
	I0520 04:55:04.666813   21370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:55:04.666844   21370 start.go:340] cluster config:
	{Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53952 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:55:04.666893   21370 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:55:04.674280   21370 out.go:177] * Starting "running-upgrade-158000" primary control-plane node in "running-upgrade-158000" cluster
	I0520 04:55:04.678244   21370 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:55:04.678256   21370 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 04:55:04.678265   21370 cache.go:56] Caching tarball of preloaded images
	I0520 04:55:04.678311   21370 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:55:04.678322   21370 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 04:55:04.678367   21370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/config.json ...
	I0520 04:55:04.678778   21370 start.go:360] acquireMachinesLock for running-upgrade-158000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:55:04.678808   21370 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "running-upgrade-158000"
	I0520 04:55:04.678818   21370 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:55:04.678824   21370 fix.go:54] fixHost starting: 
	I0520 04:55:04.679478   21370 fix.go:112] recreateIfNeeded on running-upgrade-158000: state=Running err=<nil>
	W0520 04:55:04.679485   21370 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:55:04.682229   21370 out.go:177] * Updating the running qemu2 "running-upgrade-158000" VM ...
	I0520 04:55:04.689114   21370 machine.go:94] provisionDockerMachine start ...
	I0520 04:55:04.689150   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:04.689245   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:04.689250   21370 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:55:04.742436   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-158000
	
	I0520 04:55:04.742448   21370 buildroot.go:166] provisioning hostname "running-upgrade-158000"
	I0520 04:55:04.742506   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:04.742624   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:04.742630   21370 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-158000 && echo "running-upgrade-158000" | sudo tee /etc/hostname
	I0520 04:55:04.793138   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-158000
	
	I0520 04:55:04.793180   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:04.793276   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:04.793337   21370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-158000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-158000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-158000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:55:04.844337   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:55:04.844348   21370 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18929-19024/.minikube CaCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18929-19024/.minikube}
	I0520 04:55:04.844362   21370 buildroot.go:174] setting up certificates
	I0520 04:55:04.844367   21370 provision.go:84] configureAuth start
	I0520 04:55:04.844372   21370 provision.go:143] copyHostCerts
	I0520 04:55:04.844422   21370 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem, removing ...
	I0520 04:55:04.844427   21370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem
	I0520 04:55:04.844548   21370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem (1082 bytes)
	I0520 04:55:04.844722   21370 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem, removing ...
	I0520 04:55:04.844726   21370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem
	I0520 04:55:04.844770   21370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem (1123 bytes)
	I0520 04:55:04.844873   21370 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem, removing ...
	I0520 04:55:04.844877   21370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem
	I0520 04:55:04.844915   21370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem (1675 bytes)
	I0520 04:55:04.845000   21370 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-158000 san=[127.0.0.1 localhost minikube running-upgrade-158000]
	I0520 04:55:04.933708   21370 provision.go:177] copyRemoteCerts
	I0520 04:55:04.933745   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:55:04.933752   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0520 04:55:04.960087   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 04:55:04.966945   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 04:55:04.973443   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:55:04.980633   21370 provision.go:87] duration metric: took 136.262375ms to configureAuth
	I0520 04:55:04.980642   21370 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:55:04.980747   21370 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:55:04.980779   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:04.980872   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:04.980877   21370 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:55:05.029332   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:55:05.029339   21370 buildroot.go:70] root file system type: tmpfs
	I0520 04:55:05.029392   21370 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:55:05.029445   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:05.029553   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:05.029585   21370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:55:05.080974   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:55:05.081047   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:05.081149   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:05.081160   21370 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:55:05.133425   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
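
Note on the step above: diff -u exits non-zero when the installed unit and the newly rendered one differ (or when the new file is missing), so the || { ... } branch swaps the file in and restarts docker only when something actually changed; an identical unit makes the whole command a no-op. A minimal sketch of that compare-then-swap step, assuming a hypothetical runSSH helper that executes one command over the session shown above:

    package provision

    import "fmt"

    // updateDockerUnit installs the rendered unit and restarts docker only
    // when it differs from what is already on disk. runSSH is a hypothetical
    // wrapper around the SSH session used in this log.
    func updateDockerUnit(runSSH func(cmd string) error) error {
        cmd := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new" +
            " || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;" +
            " sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
        if err := runSSH(cmd); err != nil {
            return fmt.Errorf("updating docker.service: %w", err)
        }
        return nil
    }
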
	I0520 04:55:05.133435   21370 machine.go:97] duration metric: took 444.319417ms to provisionDockerMachine
	I0520 04:55:05.133441   21370 start.go:293] postStartSetup for "running-upgrade-158000" (driver="qemu2")
	I0520 04:55:05.133446   21370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:55:05.133494   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:55:05.133503   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0520 04:55:05.160164   21370 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:55:05.161644   21370 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 04:55:05.161651   21370 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18929-19024/.minikube/addons for local assets ...
	I0520 04:55:05.161714   21370 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18929-19024/.minikube/files for local assets ...
	I0520 04:55:05.161805   21370 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem -> 195172.pem in /etc/ssl/certs
	I0520 04:55:05.161913   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:55:05.164837   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem --> /etc/ssl/certs/195172.pem (1708 bytes)
	I0520 04:55:05.171764   21370 start.go:296] duration metric: took 38.319166ms for postStartSetup
	I0520 04:55:05.171777   21370 fix.go:56] duration metric: took 492.957834ms for fixHost
	I0520 04:55:05.171807   21370 main.go:141] libmachine: Using SSH client type: native
	I0520 04:55:05.171912   21370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d46900] 0x102d49160 <nil>  [] 0s} localhost 53920 <nil> <nil>}
	I0520 04:55:05.171916   21370 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 04:55:05.221318   21370 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206104.801816847
	
	I0520 04:55:05.221328   21370 fix.go:216] guest clock: 1716206104.801816847
	I0520 04:55:05.221332   21370 fix.go:229] Guest: 2024-05-20 04:55:04.801816847 -0700 PDT Remote: 2024-05-20 04:55:05.171779 -0700 PDT m=+0.595371584 (delta=-369.962153ms)
	I0520 04:55:05.221344   21370 fix.go:200] guest clock delta is within tolerance: -369.962153ms
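
The clock fix above runs date +%s.%N on the guest and compares the result against the host wall clock; here the skew of -369.962153ms is inside tolerance, so the guest clock is left untouched. A rough sketch of that comparison, assuming the same seconds.nanoseconds output format (float parsing loses a little nanosecond precision, which does not matter at this scale; the tolerance is passed in because the log does not state minikube's actual bound):

    package provision

    import (
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses `date +%s.%N` output and returns guest minus host.
    func guestClockDelta(out string, host time.Time) (time.Duration, error) {
        sec, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(sec*float64(time.Second)))
        return guest.Sub(host), nil
    }

    // withinTolerance reports whether the absolute skew is acceptable.
    func withinTolerance(d, tol time.Duration) bool {
        if d < 0 {
            d = -d
        }
        return d < tol
    }
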
	I0520 04:55:05.221347   21370 start.go:83] releasing machines lock for "running-upgrade-158000", held for 542.53825ms
	I0520 04:55:05.221405   21370 ssh_runner.go:195] Run: cat /version.json
	I0520 04:55:05.221415   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0520 04:55:05.221405   21370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:55:05.221446   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	W0520 04:55:05.221977   21370 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53920: connect: connection refused
	I0520 04:55:05.222001   21370 retry.go:31] will retry after 228.731974ms: dial tcp [::1]:53920: connect: connection refused
	W0520 04:55:05.247291   21370 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 04:55:05.247337   21370 ssh_runner.go:195] Run: systemctl --version
	I0520 04:55:05.249155   21370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 04:55:05.250788   21370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:55:05.250809   21370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 04:55:05.253560   21370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 04:55:05.258150   21370 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:55:05.258156   21370 start.go:494] detecting cgroup driver to use...
	I0520 04:55:05.258268   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:55:05.263358   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 04:55:05.266289   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:55:05.269341   21370 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:55:05.269364   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:55:05.272322   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:55:05.275617   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:55:05.278408   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:55:05.281301   21370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:55:05.284293   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:55:05.287196   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:55:05.289877   21370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:55:05.292846   21370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:55:05.297324   21370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:55:05.300138   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:55:05.382800   21370 ssh_runner.go:195] Run: sudo systemctl restart containerd
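
The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false so containerd agrees with the cgroupfs driver chosen for this cluster, and migrate v1 runtime references to runc v2, after which systemd is reloaded and containerd restarted. The same edits expressed on an in-memory copy of the file, as a sketch (the patterns are simplified relative to the actual sed expressions):

    package provision

    import (
        "regexp"
        "strings"
    )

    // applyContainerdEdits mirrors the sed-style rewrites from the log on an
    // in-memory copy of config.toml.
    func applyContainerdEdits(conf string) string {
        conf = regexp.MustCompile(`(?m)^([ \t]*)sandbox_image = .*$`).
            ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.7"`)
        conf = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`).
            ReplaceAllString(conf, "${1}SystemdCgroup = false")
        conf = strings.ReplaceAll(conf, `"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`)
        conf = strings.ReplaceAll(conf, `"io.containerd.runc.v1"`, `"io.containerd.runc.v2"`)
        return conf
    }
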
	I0520 04:55:05.389212   21370 start.go:494] detecting cgroup driver to use...
	I0520 04:55:05.389273   21370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:55:05.397304   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:55:05.402947   21370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:55:05.412025   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:55:05.416469   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:55:05.420930   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:55:05.426399   21370 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:55:05.427720   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:55:05.430303   21370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:55:05.435182   21370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:55:05.523947   21370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:55:05.629295   21370 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:55:05.629357   21370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:55:05.635077   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:55:05.732846   21370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:55:08.588486   21370 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.85564425s)
	I0520 04:55:08.588558   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:55:08.593517   21370 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 04:55:08.599976   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:55:08.604572   21370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:55:08.698366   21370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:55:08.776344   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:55:08.856077   21370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:55:08.862064   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:55:08.867040   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:55:08.944098   21370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:55:08.982748   21370 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:55:08.982824   21370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:55:08.985678   21370 start.go:562] Will wait 60s for crictl version
	I0520 04:55:08.985728   21370 ssh_runner.go:195] Run: which crictl
	I0520 04:55:08.987268   21370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:55:08.998461   21370 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 04:55:08.998530   21370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:55:09.010799   21370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:55:09.031168   21370 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 04:55:09.031295   21370 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 04:55:09.032715   21370 kubeadm.go:877] updating cluster {Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53952 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 04:55:09.032762   21370 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:55:09.032811   21370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:55:09.043221   21370 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:55:09.043229   21370 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:55:09.043276   21370 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:55:09.046088   21370 ssh_runner.go:195] Run: which lz4
	I0520 04:55:09.047262   21370 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 04:55:09.048414   21370 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 04:55:09.048422   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 04:55:09.770321   21370 docker.go:649] duration metric: took 723.091292ms to copy over tarball
	I0520 04:55:09.770376   21370 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:55:11.010932   21370 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.240545875s)
	I0520 04:55:11.010947   21370 ssh_runner.go:146] rm: /preloaded.tar.lz4
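
The preload step follows a probe-then-copy pattern that recurs throughout this log: stat the target path on the guest, scp the artifact over only when the probe fails, then (here) unpack the lz4 tarball into /var to seed docker's image store before restarting the daemon. A generic sketch of the probe-then-copy helper, with stat and scp standing in as hypothetical wrappers over the SSH session:

    package provision

    // ensureRemoteFile copies src to dst on the guest only when the stat
    // probe fails; in this simplified sketch an existing file is taken as
    // current (the real check also compares size and mtime via "%s %y").
    func ensureRemoteFile(stat func(path string) error, scp func(src, dst string) error, src, dst string) error {
        if stat(dst) == nil {
            return nil // already present, skip the transfer
        }
        return scp(src, dst)
    }
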
	I0520 04:55:11.026203   21370 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:55:11.029495   21370 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 04:55:11.034742   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:55:11.121823   21370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:55:12.343723   21370 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.221890125s)
	I0520 04:55:12.343810   21370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:55:12.359492   21370 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:55:12.359501   21370 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:55:12.359506   21370 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 04:55:12.365874   21370 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:55:12.365874   21370 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:55:12.365905   21370 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:55:12.365965   21370 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:55:12.365976   21370 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:55:12.366020   21370 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:55:12.366186   21370 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:55:12.366620   21370 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:55:12.373936   21370 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:55:12.374048   21370 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:55:12.374304   21370 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:55:12.374528   21370 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:55:12.376662   21370 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:55:12.376785   21370 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:55:12.376807   21370 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:55:12.376869   21370 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:55:12.741572   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:55:12.754283   21370 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 04:55:12.754303   21370 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:55:12.754354   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:55:12.755568   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 04:55:12.765027   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 04:55:12.774623   21370 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 04:55:12.774640   21370 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 04:55:12.774699   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 04:55:12.784120   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:55:12.787347   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:55:12.787436   21370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 04:55:12.791252   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:55:12.802060   21370 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 04:55:12.802058   21370 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 04:55:12.802085   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 04:55:12.802095   21370 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:55:12.802135   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:55:12.804016   21370 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 04:55:12.804030   21370 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:55:12.804061   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:55:12.814209   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0520 04:55:12.814632   21370 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:55:12.814763   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:55:12.818722   21370 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 04:55:12.818732   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 04:55:12.820585   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 04:55:12.821371   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:55:12.830045   21370 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 04:55:12.830072   21370 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:55:12.830121   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:55:12.852242   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 04:55:12.869248   21370 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 04:55:12.869271   21370 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 04:55:12.869286   21370 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:55:12.869326   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:55:12.869340   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:55:12.869347   21370 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 04:55:12.869358   21370 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:55:12.869381   21370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 04:55:12.869417   21370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:55:12.870947   21370 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 04:55:12.870959   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 04:55:12.904305   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:55:12.904320   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 04:55:12.904426   21370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:55:12.915192   21370 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 04:55:12.915217   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 04:55:12.920720   21370 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:55:12.920731   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 04:55:13.032355   21370 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0520 04:55:13.085557   21370 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:55:13.085663   21370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:55:13.107077   21370 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 04:55:13.107108   21370 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:55:13.107177   21370 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:55:13.128338   21370 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:55:13.128352   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0520 04:55:14.490259   21370 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.361885958s)
	I0520 04:55:14.490259   21370 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.38305875s)
	I0520 04:55:14.490321   21370 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:55:14.490305   21370 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 04:55:14.490760   21370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:55:14.496656   21370 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 04:55:14.496720   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 04:55:14.550734   21370 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:55:14.550746   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 04:55:14.788043   21370 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 04:55:14.788076   21370 cache_images.go:92] duration metric: took 2.428580875s to LoadCachedImages
	W0520 04:55:14.788112   21370 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
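
What drives the reloads above: each required image is checked with docker image inspect --format {{.Id}}, and an ID that is missing or differs from the hash recorded for the cached copy marks the image as "needs transfer" (the preload shipped k8s.gcr.io names while registry.k8s.io names are expected, so every image misses). The decision predicate, sketched with a hypothetical inspectID wrapper:

    package provision

    // needsTransfer reports whether ref must be reloaded from the local
    // cache: it is missing from the runtime or its ID differs from the
    // expected hash. inspectID is a hypothetical wrapper around
    // `docker image inspect --format {{.Id}}`.
    func needsTransfer(inspectID func(ref string) (string, error), ref, want string) bool {
        got, err := inspectID(ref)
        return err != nil || got != want
    }
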
	I0520 04:55:14.788117   21370 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 04:55:14.788182   21370 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-158000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:55:14.788244   21370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:55:14.801233   21370 cni.go:84] Creating CNI manager for ""
	I0520 04:55:14.801245   21370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:55:14.801254   21370 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:55:14.801262   21370 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-158000 NodeName:running-upgrade-158000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:55:14.801332   21370 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-158000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
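In the generated config, podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) are expected to be disjoint, since overlapping ranges break service routing; 10.96.0.0/12 spans 10.96.0.0-10.111.255.255, well clear of 10.244.0.0/16. A quick overlap check, purely illustrative (for two CIDR blocks, overlapping is equivalent to one containing the other's base address):

    package provision

    import "net"

    // cidrsOverlap reports whether two CIDR blocks intersect.
    func cidrsOverlap(a, b string) (bool, error) {
        _, na, err := net.ParseCIDR(a)
        if err != nil {
            return false, err
        }
        _, nb, err := net.ParseCIDR(b)
        if err != nil {
            return false, err
        }
        return na.Contains(nb.IP) || nb.Contains(na.IP), nil
    }
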
	I0520 04:55:14.801382   21370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 04:55:14.804850   21370 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:55:14.804883   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 04:55:14.808049   21370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 04:55:14.812889   21370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:55:14.817987   21370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0520 04:55:14.823405   21370 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 04:55:14.824808   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:55:14.895798   21370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:55:14.900909   21370 certs.go:68] Setting up /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000 for IP: 10.0.2.15
	I0520 04:55:14.900915   21370 certs.go:194] generating shared ca certs ...
	I0520 04:55:14.900924   21370 certs.go:226] acquiring lock for ca certs: {Name:mk319383c68f33c5310e8442d826dee5d3ed7b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:55:14.901165   21370 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.key
	I0520 04:55:14.901198   21370 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.key
	I0520 04:55:14.901203   21370 certs.go:256] generating profile certs ...
	I0520 04:55:14.901257   21370 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.key
	I0520 04:55:14.901269   21370 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017
	I0520 04:55:14.901281   21370 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 04:55:14.942897   21370 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017 ...
	I0520 04:55:14.942902   21370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017: {Name:mk088fccbee0757d4e09b6f33c51043fb6cf664d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:55:14.943124   21370 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017 ...
	I0520 04:55:14.943129   21370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017: {Name:mkba09795579dcfb5e2d13bee5e46d8ee542250b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:55:14.943243   21370 certs.go:381] copying /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.crt.54a47017 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.crt
	I0520 04:55:14.943869   21370 certs.go:385] copying /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.key.54a47017 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.key
	I0520 04:55:14.944011   21370 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/proxy-client.key
	I0520 04:55:14.944142   21370 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517.pem (1338 bytes)
	W0520 04:55:14.944165   21370 certs.go:480] ignoring /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517_empty.pem, impossibly tiny 0 bytes
	I0520 04:55:14.944169   21370 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 04:55:14.944187   21370 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem (1082 bytes)
	I0520 04:55:14.944204   21370 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem (1123 bytes)
	I0520 04:55:14.944223   21370 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem (1675 bytes)
	I0520 04:55:14.944259   21370 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem (1708 bytes)
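
The regenerated apiserver certificate above is issued with IP SANs covering the in-cluster service VIP (10.96.0.1, the first usable address of the 10.96.0.0/12 service CIDR), loopback, 10.0.0.1, and the node IP 10.0.2.15. A sketch of an equivalent crypto/x509 template; the CommonName, validity window, and key usages here are placeholders, not minikube's actual values:

    package provision

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // apiserverTemplate mirrors the IP SANs reported in the log.
    func apiserverTemplate() *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube-apiserver"}, // placeholder
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
                net.ParseIP("10.0.2.15"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(24 * time.Hour), // placeholder validity
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }
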
	I0520 04:55:14.944605   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:55:14.952734   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:55:14.960540   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:55:14.968168   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:55:14.975181   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 04:55:14.981670   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:55:14.988932   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:55:14.996546   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 04:55:15.004268   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517.pem --> /usr/share/ca-certificates/19517.pem (1338 bytes)
	I0520 04:55:15.011298   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem --> /usr/share/ca-certificates/195172.pem (1708 bytes)
	I0520 04:55:15.018127   21370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:55:15.025140   21370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:55:15.030164   21370 ssh_runner.go:195] Run: openssl version
	I0520 04:55:15.032077   21370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19517.pem && ln -fs /usr/share/ca-certificates/19517.pem /etc/ssl/certs/19517.pem"
	I0520 04:55:15.035282   21370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19517.pem
	I0520 04:55:15.036760   21370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 11:42 /usr/share/ca-certificates/19517.pem
	I0520 04:55:15.036776   21370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19517.pem
	I0520 04:55:15.038837   21370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19517.pem /etc/ssl/certs/51391683.0"
	I0520 04:55:15.041596   21370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195172.pem && ln -fs /usr/share/ca-certificates/195172.pem /etc/ssl/certs/195172.pem"
	I0520 04:55:15.045072   21370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195172.pem
	I0520 04:55:15.046682   21370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 11:42 /usr/share/ca-certificates/195172.pem
	I0520 04:55:15.046703   21370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195172.pem
	I0520 04:55:15.048402   21370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195172.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:55:15.051152   21370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:55:15.053978   21370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:55:15.055606   21370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:54 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:55:15.055625   21370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:55:15.057316   21370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:55:15.060577   21370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:55:15.062142   21370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 04:55:15.063985   21370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 04:55:15.065803   21370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 04:55:15.067816   21370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 04:55:15.069871   21370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 04:55:15.071745   21370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 04:55:15.073433   21370 kubeadm.go:391] StartCluster: {Name:running-upgrade-158000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53952 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-158000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:55:15.073502   21370 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:55:15.083961   21370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 04:55:15.087903   21370 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 04:55:15.087910   21370 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 04:55:15.087913   21370 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 04:55:15.087931   21370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 04:55:15.091146   21370 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:55:15.091186   21370 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-158000" does not appear in /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:55:15.091200   21370 kubeconfig.go:62] /Users/jenkins/minikube-integration/18929-19024/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-158000" cluster setting kubeconfig missing "running-upgrade-158000" context setting]
	I0520 04:55:15.091356   21370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:55:15.092068   21370 kapi.go:59] client config for running-upgrade-158000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040d0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:55:15.092856   21370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 04:55:15.095645   21370 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-158000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0520 04:55:15.095650   21370 kubeadm.go:1154] stopping kube-system containers ...
	I0520 04:55:15.095691   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:55:15.106907   21370 docker.go:483] Stopping containers: [9c0379f379fe ae1e27bb7cba b31195db0873 cf7debe9745c dc7f1ac48726 3e8334495368 3a2f8a16d941 d2024ccaf41a 317e103732b9 87218e8ecbeb af99c8353736 f8f6675b8bfa e054c091355a 173493920f9c]
	I0520 04:55:15.106979   21370 ssh_runner.go:195] Run: docker stop 9c0379f379fe ae1e27bb7cba b31195db0873 cf7debe9745c dc7f1ac48726 3e8334495368 3a2f8a16d941 d2024ccaf41a 317e103732b9 87218e8ecbeb af99c8353736 f8f6675b8bfa e054c091355a 173493920f9c
	I0520 04:55:15.118231   21370 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 04:55:15.216333   21370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:55:15.220458   21370 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 May 20 11:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 May 20 11:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May 20 11:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 May 20 11:54 /etc/kubernetes/scheduler.conf
	
	I0520 04:55:15.220495   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/admin.conf
	I0520 04:55:15.223634   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:55:15.223660   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:55:15.226371   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/kubelet.conf
	I0520 04:55:15.229112   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:55:15.229135   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:55:15.232181   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/controller-manager.conf
	I0520 04:55:15.234551   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:55:15.234577   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:55:15.237919   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/scheduler.conf
	I0520 04:55:15.240943   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:55:15.240966   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:55:15.243814   21370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:55:15.246625   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:55:15.268504   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:55:15.993854   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:55:16.308164   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:55:16.330498   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:55:16.353154   21370 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:55:16.353235   21370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:55:16.855670   21370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:55:17.355282   21370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:55:17.359334   21370 api_server.go:72] duration metric: took 1.006188958s to wait for apiserver process to appear ...
	I0520 04:55:17.359342   21370 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:55:17.359350   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:22.361500   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:22.361599   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:27.362575   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:27.362657   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:32.363631   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:32.363666   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:37.364585   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:37.364644   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:42.366677   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:42.366756   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:47.367921   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:47.368010   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:52.369644   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:52.369736   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:55:57.372392   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:55:57.372472   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:02.375117   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:02.375202   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:07.377827   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:07.377946   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:12.380558   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:12.380633   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:17.383156   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:17.383369   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:56:17.406533   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:56:17.406623   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:56:17.423232   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:56:17.423291   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:56:17.434184   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:56:17.434238   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:56:17.444799   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:56:17.444858   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:56:17.455025   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:56:17.455094   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:56:17.465384   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:56:17.465454   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:56:17.475575   21370 logs.go:276] 0 containers: []
	W0520 04:56:17.475586   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:56:17.475638   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:56:17.486056   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:56:17.486072   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:56:17.486077   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:56:17.502531   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:56:17.502544   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:56:17.522551   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:56:17.522563   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:56:17.534173   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:56:17.534184   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:56:17.559373   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:56:17.559383   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:56:17.573738   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:56:17.573751   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:56:17.585561   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:56:17.585572   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:56:17.611978   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:56:17.611989   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:56:17.623357   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:56:17.623368   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:56:17.637221   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:56:17.637233   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:56:17.654329   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:56:17.654340   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:56:17.668980   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:56:17.668992   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:56:17.681086   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:56:17.681098   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:56:17.715639   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:56:17.715648   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:56:17.788183   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:56:17.788196   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:56:17.799860   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:56:17.799873   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:56:17.804065   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:56:17.804074   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:56:20.321168   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:25.323431   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:25.323672   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:56:25.352801   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:56:25.352902   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:56:25.369760   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:56:25.369847   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:56:25.389167   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:56:25.389241   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:56:25.404242   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:56:25.404304   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:56:25.414588   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:56:25.414665   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:56:25.425064   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:56:25.425132   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:56:25.435043   21370 logs.go:276] 0 containers: []
	W0520 04:56:25.435053   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:56:25.435102   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:56:25.444904   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:56:25.444922   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:56:25.444927   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:56:25.480954   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:56:25.480961   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:56:25.494604   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:56:25.494617   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:56:25.519286   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:56:25.519298   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:56:25.533114   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:56:25.533125   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:56:25.547380   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:56:25.547394   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:56:25.559749   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:56:25.559762   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:56:25.570624   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:56:25.570639   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:56:25.597002   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:56:25.597013   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:56:25.612414   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:56:25.612423   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:56:25.624660   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:56:25.624670   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:56:25.635942   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:56:25.635951   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:56:25.647395   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:56:25.647405   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:56:25.660580   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:56:25.660595   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:56:25.665087   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:56:25.665098   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:56:25.700213   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:56:25.700228   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:56:25.711338   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:56:25.711349   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:56:28.235936   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:33.238495   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:33.239011   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:56:33.277381   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:56:33.277523   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:56:33.298994   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:56:33.299114   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:56:33.315005   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:56:33.315088   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:56:33.330711   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:56:33.330786   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:56:33.343548   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:56:33.343620   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:56:33.354531   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:56:33.354594   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:56:33.364900   21370 logs.go:276] 0 containers: []
	W0520 04:56:33.364912   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:56:33.364972   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:56:33.374886   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:56:33.374905   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:56:33.374911   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:56:33.391850   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:56:33.391862   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:56:33.405625   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:56:33.405638   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:56:33.420129   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:56:33.420138   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:56:33.435549   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:56:33.435559   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:56:33.453159   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:56:33.453172   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:56:33.479270   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:56:33.479279   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:56:33.513393   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:56:33.513402   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:56:33.538017   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:56:33.538027   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:56:33.549320   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:56:33.549333   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:56:33.560387   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:56:33.560401   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:56:33.571762   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:56:33.571775   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:56:33.582923   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:56:33.582935   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:56:33.594235   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:56:33.594248   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:56:33.598760   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:56:33.598766   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:56:33.633114   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:56:33.633126   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:56:33.647433   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:56:33.647444   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:56:36.161331   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:41.164174   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:41.164675   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:56:41.195463   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:56:41.195599   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:56:41.214686   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:56:41.214775   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:56:41.228571   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:56:41.228643   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:56:41.240349   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:56:41.240424   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:56:41.250647   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:56:41.250716   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:56:41.261160   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:56:41.261233   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:56:41.271236   21370 logs.go:276] 0 containers: []
	W0520 04:56:41.271247   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:56:41.271304   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:56:41.283886   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:56:41.283907   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:56:41.283912   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:56:41.288119   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:56:41.288127   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:56:41.303453   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:56:41.303461   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:56:41.314893   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:56:41.314904   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:56:41.325938   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:56:41.325949   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:56:41.339648   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:56:41.339657   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:56:41.353617   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:56:41.353625   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:56:41.366971   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:56:41.366981   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:56:41.378054   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:56:41.378067   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:56:41.395823   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:56:41.395832   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:56:41.408969   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:56:41.408980   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:56:41.434423   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:56:41.434436   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:56:41.446393   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:56:41.446401   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:56:41.482661   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:56:41.482669   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:56:41.515996   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:56:41.516007   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:56:41.540927   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:56:41.540936   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:56:41.552135   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:56:41.552145   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:56:44.065511   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:49.067103   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:49.067563   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:56:49.110616   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:56:49.110753   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:56:49.131553   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:56:49.131668   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:56:49.145970   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:56:49.146045   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:56:49.158340   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:56:49.158418   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:56:49.173872   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:56:49.173941   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:56:49.184254   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:56:49.184316   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:56:49.194537   21370 logs.go:276] 0 containers: []
	W0520 04:56:49.194547   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:56:49.194602   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:56:49.216036   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:56:49.216056   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:56:49.216061   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:56:49.250625   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:56:49.250635   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:56:49.275649   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:56:49.275659   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:56:49.288339   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:56:49.288349   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:56:49.312281   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:56:49.312291   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:56:49.326453   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:56:49.326467   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:56:49.338625   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:56:49.338639   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:56:49.355097   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:56:49.355107   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:56:49.367198   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:56:49.367211   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:56:49.371552   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:56:49.371561   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:56:49.385679   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:56:49.385689   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:56:49.400317   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:56:49.400331   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:56:49.412167   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:56:49.412181   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:56:49.423077   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:56:49.423087   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:56:49.456925   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:56:49.456937   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:56:49.470991   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:56:49.471002   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:56:49.482533   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:56:49.482545   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:56:52.003153   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:56:57.005886   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:56:57.006223   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:56:57.039136   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:56:57.039280   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:56:57.059395   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:56:57.059505   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:56:57.073318   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:56:57.073397   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:56:57.084859   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:56:57.084930   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:56:57.095352   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:56:57.095411   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:56:57.109450   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:56:57.109518   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:56:57.119514   21370 logs.go:276] 0 containers: []
	W0520 04:56:57.119525   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:56:57.119579   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:56:57.130034   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:56:57.130062   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:56:57.130068   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:56:57.167144   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:56:57.167157   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:56:57.181559   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:56:57.181568   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:56:57.193571   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:56:57.193584   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:56:57.205107   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:56:57.205119   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:56:57.217525   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:56:57.217535   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:56:57.222315   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:56:57.222325   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:56:57.246788   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:56:57.246800   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:56:57.260766   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:56:57.260776   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:56:57.271883   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:56:57.271896   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:56:57.283213   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:56:57.283223   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:56:57.307140   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:56:57.307146   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:56:57.341728   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:56:57.341736   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:56:57.356377   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:56:57.356387   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:56:57.367330   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:56:57.367343   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:56:57.382943   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:56:57.382953   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:56:57.400351   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:56:57.400362   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:56:59.914255   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:04.916896   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:04.917215   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:04.951865   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:04.952015   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:04.970043   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:04.970149   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:04.984137   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:04.984207   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:04.995603   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:04.995669   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:05.005835   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:05.005898   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:05.016394   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:05.016455   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:05.026534   21370 logs.go:276] 0 containers: []
	W0520 04:57:05.026545   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:05.026601   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:05.037060   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:05.037076   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:05.037082   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:05.071097   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:05.071108   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:05.095650   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:05.095661   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:05.106958   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:05.106969   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:05.122568   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:05.122580   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:05.137784   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:05.137793   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:05.162932   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:05.162942   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:05.196909   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:05.196916   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:05.210146   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:05.210157   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:05.228769   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:05.228780   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:05.240792   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:05.240806   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:57:05.252631   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:05.252639   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:05.256893   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:05.256900   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:05.267489   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:05.267504   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:05.281518   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:05.281527   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:05.293419   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:05.293432   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:05.311012   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:05.311025   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:07.822913   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:12.825120   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:12.825182   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:12.837576   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:12.837662   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:12.849087   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:12.849158   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:12.860819   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:12.860885   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:12.873046   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:12.873104   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:12.887822   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:12.887879   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:12.899730   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:12.899797   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:12.911563   21370 logs.go:276] 0 containers: []
	W0520 04:57:12.911575   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:12.911621   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:12.922864   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:12.922886   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:12.922892   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:12.958358   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:12.958368   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:12.970204   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:12.970213   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:12.974520   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:12.974526   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:13.010324   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:13.010334   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:13.028336   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:13.028348   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:13.041006   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:13.041019   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:13.057182   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:13.057196   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:13.070348   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:13.070361   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:13.097920   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:13.097940   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:13.113534   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:13.113547   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:13.127189   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:13.127202   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:13.146620   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:13.146641   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:13.173396   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:13.173416   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:13.189262   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:13.189270   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:13.210370   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:13.210385   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:13.226156   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:13.226168   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:57:15.740424   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:20.741145   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:20.741278   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:20.768775   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:20.768843   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:20.781046   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:20.781120   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:20.793332   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:20.793406   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:20.807998   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:20.808078   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:20.820525   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:20.820610   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:20.833680   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:20.833755   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:20.847303   21370 logs.go:276] 0 containers: []
	W0520 04:57:20.847313   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:20.847368   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:20.859181   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:20.859200   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:20.859206   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:20.875933   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:20.875948   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:20.894625   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:20.894635   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:57:20.908037   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:20.908048   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:20.943908   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:20.943917   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:20.960309   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:20.960318   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:20.986469   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:20.986478   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:21.001440   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:21.001452   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:21.017021   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:21.017033   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:21.021759   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:21.021767   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:21.057021   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:21.057035   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:21.071241   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:21.071254   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:21.086615   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:21.086628   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:21.097986   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:21.098000   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:21.109664   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:21.109677   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:21.122378   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:21.122389   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:21.134599   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:21.134613   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:23.662512   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:28.665225   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:28.665566   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:28.701174   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:28.701302   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:28.720746   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:28.720849   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:28.735378   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:28.735459   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:28.747809   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:28.747881   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:28.761301   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:28.761371   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:28.772276   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:28.772342   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:28.782338   21370 logs.go:276] 0 containers: []
	W0520 04:57:28.782350   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:28.782407   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:28.793157   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:28.793176   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:28.793182   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:28.806091   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:28.806104   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:28.843999   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:28.844009   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:28.858592   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:28.858603   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:28.874057   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:28.874069   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:28.886283   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:28.886296   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:28.912052   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:28.912065   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:28.925230   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:28.925241   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:28.949361   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:28.949372   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:28.963352   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:28.963367   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:28.980218   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:28.980234   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:28.991286   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:28.991296   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:29.005153   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:29.005161   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:29.015968   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:29.015979   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:29.031166   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:29.031179   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:29.066056   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:29.066065   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:29.070525   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:29.070532   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
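With the IDs in hand, every component container is dumped via `docker logs --tail 400 <id>`, capping each capture at its most recent 400 lines, while the host-level sources (kubelet and docker journals, dmesg) go through the same `/bin/bash -c` wrapper. A sketch of the per-container tail, again assuming local execution rather than the SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainer captures at most the last 400 log lines of one
    // container, mirroring the `docker logs --tail 400 <id>` calls above.
    func tailContainer(id string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"docker logs --tail 400 "+id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, id := range []string{"9ee8977e1513", "dc7f1ac48726"} {
    		logs, err := tailContainer(id)
    		if err != nil {
    			fmt.Println(id, "error:", err)
    			continue
    		}
    		fmt.Println(logs)
    	}
    }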
	I0520 04:57:31.582705   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:36.584509   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:36.584953   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:36.624588   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:36.624719   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:36.644160   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:36.644257   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:36.661945   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:36.662023   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:36.672995   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:36.673069   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:36.687539   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:36.687607   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:36.697604   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:36.697669   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:36.709089   21370 logs.go:276] 0 containers: []
	W0520 04:57:36.709102   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:36.709157   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:36.720066   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:36.720093   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:36.720099   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:36.757025   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:36.757035   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:36.761555   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:36.761561   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:36.775893   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:36.775903   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:36.787929   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:36.787940   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:36.808719   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:36.808731   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:36.820499   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:36.820513   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:36.855161   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:36.855173   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:36.887045   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:36.887057   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:36.901454   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:36.901467   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:36.917428   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:36.917442   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:36.942392   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:36.942400   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:36.955613   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:36.955625   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:36.967609   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:36.967624   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:36.979077   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:36.979090   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:36.997093   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:36.997103   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:37.014373   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:37.014385   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
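The whole cycle then repeats for as long as the apiserver stays silent: probe, wait out the five-second client timeout, re-enumerate containers, re-gather logs, and probe again a couple of seconds later, so one full iteration lands roughly every eight seconds in the timestamps above. A hypothetical sketch of that outer loop (the real wait logic in minikube's api_server.go is more involved):

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForHealthy polls probe() until it succeeds or the overall deadline
    // passes; probe would be the healthz check sketched earlier, and the
    // log-gathering pass would run between attempts.
    func waitForHealthy(probe func() error, deadline time.Duration) error {
    	end := time.Now().Add(deadline)
    	for time.Now().Before(end) {
    		if err := probe(); err == nil {
    			return nil // apiserver answered
    		}
    		// container discovery, docker logs, journalctl, dmesg happen here
    		time.Sleep(2 * time.Second) // short pause before the next probe
    	}
    	return fmt.Errorf("apiserver not healthy after %s", deadline)
    }

    func main() {
    	probe := func() error { return fmt.Errorf("context deadline exceeded") }
    	fmt.Println(waitForHealthy(probe, 10*time.Second))
    }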
	I0520 04:57:39.526985   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:44.529188   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:44.529312   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:44.543084   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:44.543166   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:44.555563   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:44.555647   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:44.567983   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:44.568068   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:44.584356   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:44.584439   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:44.600470   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:44.601287   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:44.613739   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:44.613816   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:44.626304   21370 logs.go:276] 0 containers: []
	W0520 04:57:44.626316   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:44.626377   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:44.640545   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:44.640567   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:44.640573   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:44.680996   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:44.681014   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:44.686282   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:44.686295   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:44.713380   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:44.713413   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:44.729741   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:44.729755   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:44.742946   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:44.742959   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:44.769090   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:44.769112   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:44.805352   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:44.805364   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:44.817537   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:44.817549   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:44.830003   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:44.830015   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:44.841473   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:44.841482   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:44.856149   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:44.856162   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:44.871848   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:44.871858   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:44.884445   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:44.884454   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:44.899200   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:44.899212   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:44.923485   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:44.923501   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:44.941790   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:44.941804   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:57:47.456258   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:57:52.458509   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:57:52.458747   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:57:52.484725   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:57:52.484833   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:57:52.511017   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:57:52.511102   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:57:52.524807   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:57:52.524872   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:57:52.535874   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:57:52.535945   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:57:52.546577   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:57:52.546647   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:57:52.557024   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:57:52.557100   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:57:52.567843   21370 logs.go:276] 0 containers: []
	W0520 04:57:52.567855   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:57:52.567915   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:57:52.577995   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:57:52.578013   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:57:52.578019   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:57:52.591700   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:57:52.591710   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:57:52.605833   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:57:52.605844   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:57:52.619424   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:57:52.619435   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:57:52.643248   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:57:52.643259   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:57:52.654860   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:57:52.654873   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:57:52.672764   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:57:52.672777   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:57:52.684406   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:57:52.684419   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:57:52.695386   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:57:52.695396   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:57:52.707103   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:57:52.707117   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:57:52.743478   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:57:52.743487   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:57:52.777735   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:57:52.777748   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:57:52.788814   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:57:52.788824   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:57:52.804537   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:57:52.804549   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:57:52.816158   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:57:52.816172   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:57:52.820770   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:57:52.820780   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:57:52.833442   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:57:52.833456   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:57:55.358927   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:00.360252   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:00.360400   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:00.372588   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:00.372667   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:00.391165   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:00.391231   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:00.404197   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:00.404265   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:00.414671   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:00.414749   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:00.425674   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:00.425742   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:00.439912   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:00.439981   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:00.450455   21370 logs.go:276] 0 containers: []
	W0520 04:58:00.450467   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:00.450528   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:00.461237   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:00.461254   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:00.461260   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:00.496333   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:00.496344   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:00.508650   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:00.508662   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:00.520805   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:00.520819   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:00.545099   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:00.545109   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:00.580937   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:00.580944   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:00.592637   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:00.592648   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:00.609914   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:00.609925   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:00.622236   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:00.622247   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:00.637715   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:00.637725   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:00.656586   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:00.656597   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:00.668742   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:00.668754   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:00.680979   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:00.680990   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:00.694921   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:00.694932   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:00.709562   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:00.709572   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:00.734636   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:00.734646   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:00.750338   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:00.750348   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:03.256771   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:08.259080   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:08.259193   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:08.270174   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:08.270246   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:08.281204   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:08.281283   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:08.295853   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:08.295928   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:08.307298   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:08.307374   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:08.318136   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:08.318206   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:08.328706   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:08.328777   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:08.339041   21370 logs.go:276] 0 containers: []
	W0520 04:58:08.339052   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:08.339108   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:08.351464   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:08.351481   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:08.351487   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:08.387257   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:08.387271   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:08.413876   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:08.413891   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:08.426049   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:08.426062   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:08.438373   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:08.438384   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:08.450500   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:08.450514   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:08.475688   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:08.475703   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:08.480972   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:08.480980   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:08.495379   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:08.495390   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:08.510168   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:08.510183   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:08.525900   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:08.525912   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:08.545334   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:08.545353   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:08.559924   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:08.559938   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:08.596268   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:08.596284   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:08.611347   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:08.611363   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:08.627353   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:08.627368   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:08.639550   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:08.639564   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:11.153815   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:16.156186   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:16.156604   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:16.197699   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:16.197839   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:16.219723   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:16.219840   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:16.236455   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:16.236539   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:16.249731   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:16.249809   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:16.274262   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:16.274342   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:16.299286   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:16.299363   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:16.311458   21370 logs.go:276] 0 containers: []
	W0520 04:58:16.311469   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:16.311533   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:16.322029   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:16.322048   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:16.322054   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:16.333502   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:16.333513   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:16.345186   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:16.345197   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:16.361509   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:16.361523   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:16.378373   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:16.378386   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:16.390342   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:16.390353   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:16.414019   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:16.414037   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:16.439020   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:16.439031   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:16.478711   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:16.478722   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:16.493262   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:16.493271   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:16.505014   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:16.505024   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:16.520719   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:16.520728   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:16.534015   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:16.534027   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:16.538396   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:16.538406   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:16.552607   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:16.552620   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:16.570455   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:16.570465   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:16.582869   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:16.582880   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:19.120886   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:24.123464   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:24.123889   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:24.160284   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:24.160416   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:24.180727   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:24.180813   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:24.195609   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:24.195673   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:24.208123   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:24.208198   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:24.219085   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:24.219157   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:24.230638   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:24.230712   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:24.242532   21370 logs.go:276] 0 containers: []
	W0520 04:58:24.242549   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:24.242617   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:24.258411   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:24.258431   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:24.258437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:24.283434   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:24.283454   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:24.299173   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:24.299187   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:24.311268   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:24.311283   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:24.334254   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:24.334263   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:24.338531   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:24.338537   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:24.352863   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:24.352877   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:24.364736   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:24.364748   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:24.377208   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:24.377219   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:24.390355   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:24.390367   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:24.404295   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:24.404307   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:24.416368   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:24.416378   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:24.452144   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:24.452157   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:24.469507   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:24.469518   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:24.485449   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:24.485457   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:24.520190   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:24.520198   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:24.532475   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:24.532489   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:27.051562   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:32.054298   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:32.054531   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:32.066039   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:32.066120   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:32.077211   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:32.077290   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:32.088444   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:32.088508   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:32.100148   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:32.100213   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:32.113610   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:32.113675   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:32.124221   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:32.124293   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:32.135911   21370 logs.go:276] 0 containers: []
	W0520 04:58:32.135921   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:32.135971   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:32.146903   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:32.146932   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:32.146938   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:32.160755   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:32.160765   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:32.172929   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:32.172940   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:32.208352   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:32.208360   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:32.213058   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:32.213064   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:32.224792   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:32.224803   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:32.236816   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:32.236827   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:32.247937   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:32.247948   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:32.259453   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:32.259466   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:32.295224   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:32.295238   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:32.319973   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:32.319987   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:32.334510   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:32.334525   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:32.351141   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:32.351152   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:32.368557   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:32.368568   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:32.383503   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:32.383513   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:32.394749   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:32.394760   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:32.419487   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:32.419495   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:34.935786   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:39.936384   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:39.936504   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:39.948717   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:39.948789   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:39.959318   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:39.959384   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:39.969611   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:39.969685   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:39.980461   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:39.980530   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:39.994877   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:39.994948   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:40.005105   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:40.005175   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:40.015752   21370 logs.go:276] 0 containers: []
	W0520 04:58:40.015767   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:40.015820   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:40.026296   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:40.026315   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:40.026321   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:40.038606   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:40.038616   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:40.042682   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:40.042690   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:40.070769   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:40.070779   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:40.084542   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:40.084557   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:40.098664   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:40.098673   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:40.134878   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:40.134888   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:40.148940   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:40.148950   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:40.163712   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:40.163724   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:40.180011   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:40.180021   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:40.191221   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:40.191231   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:40.213994   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:40.214000   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:40.227652   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:40.227661   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:40.238294   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:40.238306   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:40.249801   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:40.249814   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:40.265157   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:40.265170   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:40.300103   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:40.300113   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:42.813601   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:47.816010   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:47.816117   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:47.827547   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:47.827614   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:47.839086   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:47.839159   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:47.850969   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:47.851044   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:47.862958   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:47.863032   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:47.875854   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:47.875929   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:47.888755   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:47.888827   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:47.900877   21370 logs.go:276] 0 containers: []
	W0520 04:58:47.900888   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:47.900954   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:47.914660   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:47.914681   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:47.914687   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:47.954350   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:47.954366   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:47.970426   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:47.970439   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:47.983161   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:47.983176   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:48.002103   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:48.002115   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:48.014580   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:48.014593   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:48.028137   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:48.028150   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:48.041154   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:48.041166   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:48.054791   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:48.054805   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:48.067925   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:48.067937   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:48.073170   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:48.073182   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:48.111837   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:48.111850   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:48.138501   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:48.138516   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:48.153481   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:48.153493   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:48.168203   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:48.168216   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:48.183713   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:48.183724   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:48.201437   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:48.201447   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
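Each diagnostics cycle in this trace follows one fixed recipe: enumerate the k8s_* containers for every control-plane component via docker ps filters, then tail the last 400 lines of each hit (plus journalctl for kubelet and Docker). A minimal Go sketch of that recipe, illustrative only and not minikube's actual logs.go:

```go
// Hypothetical condensation of the log-gathering cycle above: list matching
// container IDs per component, then tail each container's logs. The component
// names and the 400-line tail mirror the log; the helper itself is a sketch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("/bin/bash", "-c",
				"docker logs --tail 400 "+id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```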
	I0520 04:58:50.726906   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:55.729294   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:55.729438   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:55.740933   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:55.741016   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:55.752142   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:55.752208   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:55.762797   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:55.762867   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:55.773420   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:55.773496   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:55.783829   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:55.783895   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:55.794137   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:55.794192   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:55.804461   21370 logs.go:276] 0 containers: []
	W0520 04:58:55.804474   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:55.804528   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:55.815118   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:55.815135   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:55.815139   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:55.828684   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:55.828694   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:55.852105   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:55.852113   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:55.887998   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:55.888004   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:55.901954   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:55.901966   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:55.926893   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:55.926907   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:55.946287   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:55.946297   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:55.957993   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:55.958003   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:55.974947   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:55.974957   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:55.986425   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:55.986434   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:55.997939   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:55.997951   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:56.002747   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:56.002765   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:56.040826   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:56.040837   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:56.055940   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:56.055950   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:56.067848   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:56.067861   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:56.079688   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:56.079698   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:56.091496   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:56.091507   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:58.604986   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:03.607155   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:03.607353   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:59:03.626265   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:59:03.626366   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:59:03.640783   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:59:03.640861   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:59:03.652332   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:59:03.652423   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:59:03.663048   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:59:03.663119   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:59:03.677356   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:59:03.677422   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:59:03.687563   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:59:03.687633   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:59:03.697615   21370 logs.go:276] 0 containers: []
	W0520 04:59:03.697626   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:59:03.697682   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:59:03.708036   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:59:03.708056   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:59:03.708061   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:59:03.722568   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:59:03.722583   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:59:03.736768   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:59:03.736779   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:59:03.760729   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:59:03.760737   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:59:03.775424   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:59:03.775437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:59:03.795144   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:59:03.795158   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:59:03.806337   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:59:03.806346   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:59:03.817640   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:59:03.817651   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:59:03.829736   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:59:03.829747   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:59:03.833989   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:59:03.833995   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:59:03.867471   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:59:03.867481   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:59:03.884684   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:59:03.884698   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:59:03.900267   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:59:03.900278   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:59:03.912219   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:59:03.912230   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:59:03.947772   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:59:03.947780   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:59:03.973226   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:59:03.973244   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:59:03.986304   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:59:03.986317   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:59:06.506958   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:11.509167   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:11.509279   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:59:11.520872   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:59:11.520949   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:59:11.532199   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:59:11.532269   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:59:11.544056   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:59:11.544130   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:59:11.555457   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:59:11.555530   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:59:11.571276   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:59:11.571352   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:59:11.584595   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:59:11.584677   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:59:11.600186   21370 logs.go:276] 0 containers: []
	W0520 04:59:11.600201   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:59:11.600272   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:59:11.611391   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:59:11.611413   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:59:11.611419   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:59:11.650460   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:59:11.650475   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:59:11.662899   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:59:11.662915   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:59:11.675767   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:59:11.675781   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:59:11.689064   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:59:11.689077   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:59:11.701968   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:59:11.701981   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:59:11.718222   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:59:11.718232   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:59:11.732572   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:59:11.732588   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:59:11.744934   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:59:11.744946   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:59:11.756916   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:59:11.756931   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:59:11.778448   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:59:11.778469   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:59:11.790952   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:59:11.790965   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:59:11.795101   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:59:11.795108   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:59:11.830329   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:59:11.830343   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:59:11.855197   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:59:11.855211   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:59:11.879236   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:59:11.879244   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:59:11.893041   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:59:11.893056   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:59:14.413997   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:19.416006   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
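Every healthz probe in this trace has the same shape: an HTTPS GET against https://10.0.2.15:8443/healthz that dies after roughly 5 seconds with "Client.Timeout exceeded while awaiting headers", retried every few seconds until the restart budget runs out (the give-up line follows immediately below). A minimal sketch of such a probe, assuming a plain net/http client with a 5-second timeout; the TLS settings here are placeholders to keep the sketch self-contained:

```go
// Illustrative healthz probe matching the api_server.go:253/269 pattern above:
// a hung apiserver surfaces as a client-side timeout rather than an HTTP error.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s probe-to-failure gap in the log
		Transport: &http.Transport{
			// assumption: the real probe trusts the cluster CA; verification is
			// skipped here only so the sketch runs without cert material
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443"); err != nil {
		fmt.Println(err)
	}
}
```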
	I0520 04:59:19.416126   21370 kubeadm.go:591] duration metric: took 4m4.329975333s to restartPrimaryControlPlane
	W0520 04:59:19.416211   21370 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 04:59:19.416249   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 04:59:20.413672   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:59:20.418820   21370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:59:20.421427   21370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:59:20.424759   21370 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:59:20.424765   21370 kubeadm.go:156] found existing configuration files:
	
	I0520 04:59:20.424787   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/admin.conf
	I0520 04:59:20.427739   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:59:20.427761   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:59:20.430540   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/kubelet.conf
	I0520 04:59:20.432934   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:59:20.432954   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:59:20.436185   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/controller-manager.conf
	I0520 04:59:20.439105   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:59:20.439132   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:59:20.441604   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/scheduler.conf
	I0520 04:59:20.444543   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:59:20.444561   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
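The sweep above is a keep-or-delete pass: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the grep fails (here, trivially, because none of the files exist after the reset), so the upcoming kubeadm init rewrites them all. A sketch of the pattern, with the endpoint and file list taken from the log:

```go
// Illustrative version of the stale-kubeconfig check (kubeadm.go:162 above).
// grep exits non-zero when the endpoint (or the file itself) is missing,
// and that is the signal to delete the file.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:53952"
	for _, f := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
```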
	I0520 04:59:20.447544   21370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:59:20.465438   21370 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 04:59:20.465465   21370 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:59:20.521736   21370 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:59:20.521789   21370 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:59:20.521853   21370 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:59:20.578453   21370 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:59:20.584492   21370 out.go:204]   - Generating certificates and keys ...
	I0520 04:59:20.584530   21370 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:59:20.584557   21370 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:59:20.584597   21370 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 04:59:20.584669   21370 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 04:59:20.584717   21370 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 04:59:20.584755   21370 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 04:59:20.584794   21370 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 04:59:20.584866   21370 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 04:59:20.584969   21370 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 04:59:20.585041   21370 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 04:59:20.585063   21370 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 04:59:20.585104   21370 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:59:20.643393   21370 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:59:20.719269   21370 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:59:20.913872   21370 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:59:21.049483   21370 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:59:21.080902   21370 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:59:21.081992   21370 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:59:21.082014   21370 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:59:21.166310   21370 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:59:21.170518   21370 out.go:204]   - Booting up control plane ...
	I0520 04:59:21.170569   21370 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:59:21.170612   21370 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:59:21.170647   21370 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:59:21.170693   21370 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:59:21.170777   21370 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 04:59:25.671214   21370 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502199 seconds
	I0520 04:59:25.671373   21370 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:59:25.675402   21370 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:59:26.184306   21370 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:59:26.184395   21370 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-158000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:59:26.691840   21370 kubeadm.go:309] [bootstrap-token] Using token: vtrpym.bejbd6ufnp30co3p
	I0520 04:59:26.697994   21370 out.go:204]   - Configuring RBAC rules ...
	I0520 04:59:26.698067   21370 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:59:26.698764   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:59:26.704133   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:59:26.705237   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:59:26.706043   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:59:26.706894   21370 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:59:26.710219   21370 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:59:26.891893   21370 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:59:27.101009   21370 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:59:27.101503   21370 kubeadm.go:309] 
	I0520 04:59:27.101534   21370 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:59:27.101538   21370 kubeadm.go:309] 
	I0520 04:59:27.101587   21370 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:59:27.101593   21370 kubeadm.go:309] 
	I0520 04:59:27.101606   21370 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:59:27.101638   21370 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:59:27.101664   21370 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:59:27.101667   21370 kubeadm.go:309] 
	I0520 04:59:27.101710   21370 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:59:27.101719   21370 kubeadm.go:309] 
	I0520 04:59:27.101748   21370 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:59:27.101753   21370 kubeadm.go:309] 
	I0520 04:59:27.101779   21370 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:59:27.101835   21370 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:59:27.101880   21370 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:59:27.101885   21370 kubeadm.go:309] 
	I0520 04:59:27.101927   21370 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:59:27.101975   21370 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:59:27.101979   21370 kubeadm.go:309] 
	I0520 04:59:27.102020   21370 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vtrpym.bejbd6ufnp30co3p \
	I0520 04:59:27.102109   21370 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc \
	I0520 04:59:27.102124   21370 kubeadm.go:309] 	--control-plane 
	I0520 04:59:27.102130   21370 kubeadm.go:309] 
	I0520 04:59:27.102173   21370 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:59:27.102180   21370 kubeadm.go:309] 
	I0520 04:59:27.102225   21370 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vtrpym.bejbd6ufnp30co3p \
	I0520 04:59:27.102289   21370 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc 
	I0520 04:59:27.102370   21370 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
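The --discovery-token-ca-cert-hash printed in the join commands above is, per the kubeadm documentation, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of the derivation, assuming the conventional /etc/kubernetes/pki/ca.crt path:

```go
// Reproduces the sha256:... discovery hash format from the kubeadm join
// output above. Sketch only; the path is the kubeadm default, not taken
// from this log.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the raw SubjectPublicKeyInfo, not the whole certificate.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
```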
	I0520 04:59:27.102378   21370 cni.go:84] Creating CNI manager for ""
	I0520 04:59:27.102387   21370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:59:27.105775   21370 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 04:59:27.112708   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 04:59:27.115704   21370 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
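The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. What follows is a representative bridge-plus-portmap conflist in the CNI format this step implies, wrapped in Go so the sketch stays self-contained; treat every field value as a placeholder rather than minikube's exact payload:

```go
// Writes a plausible bridge CNI conflist of the kind the scp above delivers.
// Contents are a stand-in following the CNI spec (type "bridge" with
// host-local IPAM), not the actual bytes minikube copied.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```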
	I0520 04:59:27.121919   21370 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:59:27.121976   21370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:59:27.121977   21370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-158000 minikube.k8s.io/updated_at=2024_05_20T04_59_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=running-upgrade-158000 minikube.k8s.io/primary=true
	I0520 04:59:27.168438   21370 ops.go:34] apiserver oom_adj: -16
	I0520 04:59:27.168436   21370 kubeadm.go:1107] duration metric: took 46.505667ms to wait for elevateKubeSystemPrivileges
	W0520 04:59:27.168559   21370 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:59:27.168565   21370 kubeadm.go:393] duration metric: took 4m12.096962583s to StartCluster
	I0520 04:59:27.168575   21370 settings.go:142] acquiring lock: {Name:mkb0015ab6abb1526406adb43e2b3d4392387c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:59:27.168729   21370 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:59:27.169086   21370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:59:27.169284   21370 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:59:27.172886   21370 out.go:177] * Verifying Kubernetes components...
	I0520 04:59:27.169331   21370 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:59:27.169475   21370 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:59:27.180755   21370 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-158000"
	I0520 04:59:27.180759   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:59:27.180766   21370 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-158000"
	W0520 04:59:27.180769   21370 addons.go:243] addon storage-provisioner should already be in state true
	I0520 04:59:27.180778   21370 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0520 04:59:27.180781   21370 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-158000"
	I0520 04:59:27.180790   21370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-158000"
	I0520 04:59:27.181887   21370 kapi.go:59] client config for running-upgrade-158000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040d0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:59:27.182797   21370 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-158000"
	W0520 04:59:27.182802   21370 addons.go:243] addon default-storageclass should already be in state true
	I0520 04:59:27.182811   21370 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0520 04:59:27.186703   21370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:59:27.189798   21370 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:59:27.189804   21370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:59:27.189810   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0520 04:59:27.190547   21370 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:59:27.190552   21370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:59:27.190556   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
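Every Run: line in this trace travels over an SSH session like the two just opened: user docker on the guest's forwarded port (localhost:53920 here), authenticated with the profile's id_rsa key. An illustrative sketch using golang.org/x/crypto/ssh; the command run at the end is one actually seen in this log:

```go
// Opens an SSH session to the minikube guest the way the sshutil.go:53 lines
// above describe, then runs a single command. Sketch only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "localhost:53920", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, _ := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Print(string(out))
}
```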
	I0520 04:59:27.271711   21370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:59:27.276446   21370 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:59:27.276483   21370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:59:27.280813   21370 api_server.go:72] duration metric: took 111.518875ms to wait for apiserver process to appear ...
	I0520 04:59:27.280821   21370 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:59:27.280828   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:27.302726   21370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:59:27.305548   21370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:59:32.282892   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:32.282935   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:37.283291   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:37.283329   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:42.283698   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:42.283719   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:47.284157   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:47.284200   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:52.285202   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:52.285246   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:57.286109   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:57.286157   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 04:59:57.674082   21370 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 04:59:57.677489   21370 out.go:177] * Enabled addons: storage-provisioner
	I0520 04:59:57.685333   21370 addons.go:505] duration metric: took 30.516250416s for enable addons: enabled=[storage-provisioner]
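The failed 'default-storageclass' callback boils down to: list StorageClasses through the (unreachable) apiserver, then mark "standard" as default via the storageclass.kubernetes.io/is-default-class annotation. A hedged client-go sketch of that flow; the List call is the one surfacing the "dial tcp 10.0.2.15:8443: i/o timeout" above, and error handling is trimmed for brevity:

```go
// Illustrative reconstruction of the default-storageclass enable step.
// Uses standard client-go calls; not minikube's actual addon code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// With the apiserver down, this List fails with an i/o timeout.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
	for _, sc := range scs.Items {
		if sc.Name == "standard" {
			cs.StorageV1().StorageClasses().Patch(context.TODO(), sc.Name,
				types.MergePatchType, patch, metav1.PatchOptions{})
		}
	}
}
```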
	I0520 05:00:02.287435   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:02.287480   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:07.288973   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:07.289015   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:12.290870   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:12.290906   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:17.291477   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:17.291501   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:22.293628   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:22.293650   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:27.295787   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:27.295884   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:27.306533   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:27.306608   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:27.318511   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:27.318585   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:27.328993   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:27.329061   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:27.340550   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:27.340618   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:27.351266   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:27.351331   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:27.361552   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:27.361618   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:27.371917   21370 logs.go:276] 0 containers: []
	W0520 05:00:27.371927   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:27.371984   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:27.381911   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:27.381927   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:27.381933   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:27.393209   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:27.393225   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:27.430801   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:27.430809   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:27.434852   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:27.434858   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:27.473487   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:27.473498   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:27.485887   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:27.485897   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:27.500249   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:27.500257   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:27.517160   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:27.517170   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:27.531494   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:27.531506   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:27.549672   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:27.549685   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:27.561746   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:27.561757   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:27.574346   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:27.574356   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:27.586749   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:27.586763   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:30.112425   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:35.113152   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:35.113230   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:35.123813   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:35.123883   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:35.134367   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:35.134437   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:35.145006   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:35.145078   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:35.155347   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:35.155408   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:35.165421   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:35.165489   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:35.175330   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:35.175397   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:35.185793   21370 logs.go:276] 0 containers: []
	W0520 05:00:35.185819   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:35.185877   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:35.196250   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:35.196268   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:35.196274   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:35.235557   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:35.235566   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:35.239934   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:35.239944   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:35.278326   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:35.278338   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:35.293741   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:35.293752   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:35.305104   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:35.305115   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:35.320542   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:35.320554   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:35.344892   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:35.344902   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:35.358686   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:35.358697   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:35.373183   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:35.373194   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:35.386273   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:35.386285   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:35.401429   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:35.401441   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:35.412863   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:35.412876   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:37.932886   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:42.933472   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:42.933558   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:42.945765   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:42.945842   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:42.960129   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:42.960202   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:42.971024   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:42.971098   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:42.981420   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:42.981488   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:42.993236   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:42.993315   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:43.004958   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:43.005035   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:43.016241   21370 logs.go:276] 0 containers: []
	W0520 05:00:43.016253   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:43.016310   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:43.026569   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:43.026586   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:43.026593   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:43.063324   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:43.063332   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:43.067615   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:43.067620   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:43.081957   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:43.081971   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:43.095253   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:43.095263   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:43.106658   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:43.106672   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:43.121085   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:43.121098   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:43.144640   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:43.144647   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:43.179048   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:43.179062   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:43.191083   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:43.191097   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:43.202266   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:43.202290   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:43.219338   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:43.219352   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:43.230754   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:43.230768   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:45.744035   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:50.745048   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:50.745123   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:50.756583   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:50.756659   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:50.773032   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:50.773104   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:50.787397   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:50.787467   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:50.799726   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:50.799802   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:50.811521   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:50.811609   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:50.823388   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:50.823461   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:50.834616   21370 logs.go:276] 0 containers: []
	W0520 05:00:50.834629   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:50.834690   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:50.846142   21370 logs.go:276] 1 containers: [ffea2e6e531d]
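Each failed probe is followed by the discovery block above: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per control-plane component, yielding the container IDs whose logs are tailed next (and an empty list for "kindnet", which this cluster does not run). A minimal sketch of that step, assuming a local docker CLI rather than minikube's ssh_runner:

    package diag

    import (
    	"os/exec"
    	"strings"
    )

    // containerIDs lists the IDs of containers whose name matches
    // k8s_<component>, mirroring the discovery commands in the log.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// one short ID per line, e.g. "0b425496d79d" for kube-apiserver
    	return strings.Fields(string(out)), nil
    }

The --format={{.ID}} template keeps the output to bare IDs, which is why the log can report counts like "1 containers: [ffea2e6e531d]" without any extra parsing.
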
	I0520 05:00:50.846157   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:50.846163   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:50.871014   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:50.871025   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:50.875537   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:50.875548   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:50.912980   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:50.912993   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:50.928285   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:50.928294   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:50.948270   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:50.948284   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:50.964475   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:50.964484   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:50.981476   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:50.981485   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:50.996973   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:50.996983   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:51.035949   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:51.035959   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:51.058148   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:51.058158   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:51.069880   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:51.069892   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:51.081494   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:51.081504   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
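The gathering phase then tails 400 lines from each source: journalctl for the kubelet and docker/cri-docker units, dmesg for kernel-level warnings, docker logs --tail 400 <id> for each discovered container, kubectl describe nodes against the in-VM kubeconfig, and finally a container status listing. Note the fallback in that last command: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a degrades to plain docker ps -a when crictl is absent or fails. A sketch of the per-container part, reusing the hypothetical containerIDs helper from above:

    package diag

    import "os/exec"

    // tailContainerLogs returns the last n log lines of each discovered
    // container, keyed by component name, like the repeated
    // "Gathering logs for ..." steps in the log.
    func tailContainerLogs(ids map[string]string, n string) map[string]string {
    	out := make(map[string]string)
    	for component, id := range ids { // e.g. "etcd" -> "6ec8e90f3762"
    		// docker logs writes to both streams, so capture both
    		b, err := exec.Command("docker", "logs", "--tail", n, id).CombinedOutput()
    		if err != nil {
    			continue // container may have been removed; skip it
    		}
    		out[component] = string(b)
    	}
    	return out
    }
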
	I0520 05:00:53.594916   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:58.597083   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:58.597188   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:58.609222   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:58.609286   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:58.620679   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:58.620755   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:58.631948   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:58.632026   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:58.643036   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:58.643119   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:58.655491   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:58.655571   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:58.667270   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:58.667343   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:58.679259   21370 logs.go:276] 0 containers: []
	W0520 05:00:58.679269   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:58.679331   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:58.690331   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:58.690346   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:58.690350   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:58.709103   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:58.709116   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:58.722429   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:58.722439   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:58.734716   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:58.734727   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:58.739198   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:58.739211   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:58.752351   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:58.752363   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:58.768401   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:58.768412   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:58.789523   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:58.789531   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:58.806649   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:58.806661   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:58.820194   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:58.820208   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:58.835962   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:58.835976   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:58.860815   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:58.860825   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:58.898490   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:58.898498   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:01.435417   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:06.437817   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:06.437974   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:06.455137   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:06.455219   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:06.469456   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:06.469529   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:06.481921   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:06.481988   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:06.494313   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:06.494370   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:06.506217   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:06.506279   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:06.523679   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:06.523750   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:06.535918   21370 logs.go:276] 0 containers: []
	W0520 05:01:06.535928   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:06.535981   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:06.547954   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:06.547971   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:06.547977   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:06.565258   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:06.565269   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:06.584777   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:06.584786   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:06.609677   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:06.609695   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:06.648496   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:06.648514   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:06.656036   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:06.656049   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:06.672928   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:06.672940   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:06.688707   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:06.688718   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:06.707317   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:06.707330   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:06.719830   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:06.719842   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:06.759507   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:06.759519   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:06.773163   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:06.773171   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:06.791575   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:06.791584   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:09.307389   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:14.309717   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:14.309927   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:14.333204   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:14.333323   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:14.348529   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:14.348607   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:14.361461   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:14.361526   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:14.373669   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:14.373744   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:14.385551   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:14.385621   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:14.398387   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:14.398462   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:14.410454   21370 logs.go:276] 0 containers: []
	W0520 05:01:14.410466   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:14.410523   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:14.422918   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:14.422933   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:14.422939   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:14.436505   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:14.436515   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:14.450122   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:14.450136   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:14.476713   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:14.476723   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:14.490234   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:14.490252   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:14.528305   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:14.528321   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:14.543492   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:14.543505   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:14.556602   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:14.556611   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:14.573393   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:14.573404   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:14.596494   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:14.599211   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:14.637256   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:14.637267   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:14.642112   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:14.642123   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:14.669956   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:14.669969   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:17.194251   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:22.196500   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:22.196871   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:22.236503   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:22.236625   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:22.255436   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:22.255519   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:22.269295   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:22.269370   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:22.282135   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:22.282212   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:22.293539   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:22.293614   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:22.312954   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:22.313020   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:22.325019   21370 logs.go:276] 0 containers: []
	W0520 05:01:22.325027   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:22.325058   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:22.337422   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:22.337437   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:22.337444   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:22.379105   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:22.379124   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:22.394682   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:22.394694   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:22.410900   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:22.410909   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:22.429772   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:22.429782   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:22.442704   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:22.442715   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:22.468078   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:22.468089   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:22.482013   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:22.482026   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:22.486867   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:22.486878   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:22.527246   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:22.527261   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:22.547060   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:22.547072   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:22.560978   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:22.560992   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:22.574545   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:22.574557   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:25.094981   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:30.097333   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:30.097545   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:30.120173   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:30.120267   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:30.135365   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:30.135439   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:30.148315   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:30.148376   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:30.159406   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:30.159475   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:30.170520   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:30.170589   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:30.181544   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:30.181607   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:30.192735   21370 logs.go:276] 0 containers: []
	W0520 05:01:30.192745   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:30.192801   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:30.203620   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:30.203636   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:30.203642   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:30.241343   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:30.241354   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:30.258211   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:30.258224   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:30.274678   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:30.274692   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:30.288635   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:30.288644   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:30.308168   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:30.308180   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:30.322187   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:30.322199   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:30.347346   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:30.347363   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:30.386344   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:30.386359   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:30.392860   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:30.392876   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:30.408471   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:30.408485   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:30.425299   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:30.425311   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:30.439380   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:30.439395   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:32.954496   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:37.956894   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:37.957389   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:37.971735   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:37.971816   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:37.990580   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:37.990650   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:38.001272   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:38.001338   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:38.015234   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:38.015301   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:38.025570   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:38.025643   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:38.036136   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:38.036195   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:38.050312   21370 logs.go:276] 0 containers: []
	W0520 05:01:38.050323   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:38.050385   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:38.060817   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:38.060833   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:38.060838   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:38.065816   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:38.065824   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:38.079592   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:38.079603   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:38.092703   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:38.092715   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:38.104618   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:38.104631   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:38.119448   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:38.119460   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:38.132205   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:38.132218   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:38.170339   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:38.170351   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:38.207847   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:38.207857   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:38.220171   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:38.220182   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:38.232817   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:38.232828   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:38.251915   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:38.251926   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:38.278690   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:38.278702   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
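Putting the sketches together: this whole section is one outer retry loop — probe, time out after ~5 s, re-discover containers, re-gather logs, probe again — which is why the same block recurs every eight seconds or so until the surrounding test gives up. Roughly, and still with hypothetical names:

    package diag

    import (
    	"fmt"
    	"time"
    )

    // components mirrors the fixed discovery order in the log.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    // diagnoseUntilHealthy loops probe -> discover -> gather until the
    // apiserver answers or the outer deadline expires.
    func diagnoseUntilHealthy(url string, outer time.Duration) error {
    	stop := time.Now().Add(outer)
    	for time.Now().Before(stop) {
    		if waitForHealthz(url, 5*time.Second) == nil {
    			return nil // healthy; stop collecting diagnostics
    		}
    		ids := make(map[string]string)
    		for _, c := range components {
    			if found, err := containerIDs(c); err == nil && len(found) > 0 {
    				ids[c] = found[0]
    			}
    		}
    		_ = tailContainerLogs(ids, "400") // plus journalctl/dmesg/describe nodes
    	}
    	return fmt.Errorf("apiserver at %s still unhealthy after %s", url, outer)
    }
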
	I0520 05:01:40.795166   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:45.796864   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:45.797080   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:45.820566   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:45.820676   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:45.835305   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:45.835378   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:45.847670   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:45.847739   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:45.859261   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:45.859327   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:45.874772   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:45.874844   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:45.885998   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:45.886065   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:45.896247   21370 logs.go:276] 0 containers: []
	W0520 05:01:45.896260   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:45.896336   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:45.909803   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:45.909824   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:45.909833   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:45.946519   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:45.946528   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:45.962686   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:01:45.962695   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:01:45.973725   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:45.973740   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:45.991700   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:45.991709   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:46.004202   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:46.004213   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:46.017723   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:46.017732   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:46.028829   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:46.028838   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:46.040068   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:46.040078   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:46.079596   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:46.079609   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:46.085021   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:01:46.085032   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:01:46.097610   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:46.097622   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:46.111326   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:46.111337   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:46.126986   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:46.126998   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:46.152145   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:46.152163   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:48.667340   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:53.670085   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:53.670334   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:53.693421   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:53.693533   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:53.708113   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:53.708188   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:53.720994   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:53.721067   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:53.731342   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:53.731406   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:53.741887   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:53.741954   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:53.752421   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:53.752490   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:53.762698   21370 logs.go:276] 0 containers: []
	W0520 05:01:53.762710   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:53.762768   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:53.773395   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:53.773411   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:53.773415   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:53.786000   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:53.786011   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:53.790763   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:53.790771   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:53.802927   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:53.802937   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:53.814748   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:53.814759   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:53.832747   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:53.832758   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:53.871107   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:53.871117   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:53.884952   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:01:53.884964   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:01:53.897029   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:53.897040   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:53.908591   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:01:53.908601   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:01:53.919666   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:53.919679   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:53.936864   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:53.936877   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:53.962417   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:53.962428   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:53.974758   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:53.974768   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:54.029316   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:54.029328   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:56.546720   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:01.549457   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:01.549843   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:01.581608   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:01.581739   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:01.601156   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:01.601255   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:01.615694   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:01.615777   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:01.627766   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:01.627835   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:01.638728   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:01.638799   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:01.649267   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:01.649340   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:01.660337   21370 logs.go:276] 0 containers: []
	W0520 05:02:01.660347   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:01.660409   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:01.671433   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:01.671451   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:01.671457   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:01.683719   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:01.683729   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:01.708495   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:01.708505   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:01.713495   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:01.713506   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:01.724939   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:01.724952   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:01.738995   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:01.739004   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:01.777802   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:01.777816   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:01.794861   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:01.794874   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:01.809934   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:01.809947   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:01.834182   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:01.834195   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:01.846000   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:01.846012   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:01.857541   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:01.857552   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:01.893465   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:01.893476   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:01.906427   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:01.906439   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:01.918728   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:01.918741   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:04.433394   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:09.435774   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:09.436135   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:09.474173   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:09.474306   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:09.493979   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:09.494077   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:09.508524   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:09.508594   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:09.520775   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:09.520851   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:09.532011   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:09.532073   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:09.542499   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:09.542568   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:09.553239   21370 logs.go:276] 0 containers: []
	W0520 05:02:09.553247   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:09.553302   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:09.568146   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:09.568163   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:09.568169   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:09.580248   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:09.580258   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:09.592400   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:09.592411   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:09.610774   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:09.610783   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:09.625632   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:09.625645   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:09.640121   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:09.640132   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:09.652214   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:09.652224   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:09.667358   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:09.667369   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:09.705872   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:09.705883   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:09.710429   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:09.710437   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:09.747423   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:09.747437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:09.759272   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:09.759286   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:09.770728   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:09.770738   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:09.798189   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:09.798205   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:09.816329   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:09.816337   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:12.334025   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:17.336314   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:17.336480   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:17.351432   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:17.351502   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:17.361830   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:17.361892   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:17.372787   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:17.372861   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:17.383839   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:17.383907   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:17.394725   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:17.394796   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:17.405099   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:17.405163   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:17.415232   21370 logs.go:276] 0 containers: []
	W0520 05:02:17.415246   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:17.415302   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:17.425821   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:17.425836   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:17.425841   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:17.441161   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:17.441173   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:17.459857   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:17.459867   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:17.474533   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:17.474548   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:17.485890   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:17.485900   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:17.502849   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:17.502859   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:17.507299   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:17.507306   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:17.518482   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:17.518496   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:17.555762   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:17.555769   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:17.591258   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:17.591269   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:17.605744   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:17.605754   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:17.617501   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:17.617512   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:17.628629   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:17.628639   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:17.647281   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:17.647291   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:17.671783   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:17.671791   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:20.185848   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:25.188037   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:25.188152   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:25.200008   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:25.200091   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:25.210977   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:25.211045   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:25.221600   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:25.221671   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:25.231670   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:25.231738   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:25.242827   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:25.242894   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:25.253571   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:25.253637   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:25.263390   21370 logs.go:276] 0 containers: []
	W0520 05:02:25.263402   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:25.263461   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:25.273747   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:25.273767   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:25.273771   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:25.288747   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:25.288757   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:25.300450   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:25.300460   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:25.305430   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:25.305437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:25.319017   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:25.319027   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:25.330285   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:25.330296   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:25.342480   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:25.342491   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:25.364261   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:25.364271   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:25.376198   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:25.376207   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:25.412531   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:25.412542   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:25.424561   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:25.424573   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:25.442374   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:25.442385   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:25.465926   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:25.465937   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:25.478500   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:25.478515   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:25.491612   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:25.491623   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:28.031772   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:33.032561   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:33.032861   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:33.062981   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:33.063105   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:33.081363   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:33.081455   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:33.095195   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:33.095272   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:33.106961   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:33.107024   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:33.117206   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:33.117277   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:33.128020   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:33.128087   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:33.137873   21370 logs.go:276] 0 containers: []
	W0520 05:02:33.137884   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:33.137934   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:33.148177   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:33.148192   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:33.148197   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:33.162708   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:33.162719   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:33.177064   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:33.177076   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:33.194260   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:33.194271   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:33.205429   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:33.205442   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:33.245208   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:33.245225   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:33.260139   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:33.260149   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:33.275394   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:33.275405   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:33.287054   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:33.287065   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:33.298408   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:33.298418   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:33.309815   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:33.309827   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:33.334023   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:33.334030   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:33.338264   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:33.338269   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:33.371755   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:33.371765   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:33.383561   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:33.383575   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:35.897080   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:40.899548   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:40.899786   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:40.922613   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:40.922731   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:40.938754   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:40.938842   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:40.951490   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:40.951564   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:40.965045   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:40.965114   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:40.975895   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:40.975958   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:40.986904   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:40.986969   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:40.996859   21370 logs.go:276] 0 containers: []
	W0520 05:02:40.996872   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:40.996952   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:41.007805   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:41.007821   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:41.007826   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:41.012914   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:41.012924   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:41.024759   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:41.024772   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:41.042437   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:41.042449   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:41.054114   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:41.054124   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:41.068019   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:41.068033   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:41.082447   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:41.082461   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:41.106108   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:41.106115   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:41.144773   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:41.144789   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:41.165972   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:41.165982   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:41.177990   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:41.178001   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:41.189702   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:41.189713   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:41.203512   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:41.203522   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:41.224047   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:41.224058   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:41.236748   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:41.236759   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:43.775949   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:48.778230   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:48.778471   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:48.794597   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:48.794686   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:48.806649   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:48.806719   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:48.818242   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:48.818313   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:48.828403   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:48.828473   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:48.839263   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:48.839329   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:48.849825   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:48.849891   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:48.875895   21370 logs.go:276] 0 containers: []
	W0520 05:02:48.875904   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:48.875958   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:48.885908   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:48.885926   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:48.885932   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:48.902955   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:48.902969   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:48.914286   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:48.914300   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:48.925633   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:48.925644   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:48.964735   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:48.964754   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:48.969744   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:48.969751   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:48.983972   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:48.983986   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:48.998144   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:48.998157   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:49.013271   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:49.013285   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:49.034390   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:49.034401   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:49.048163   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:49.048174   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:49.071416   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:49.071426   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:49.105712   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:49.105722   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:49.117514   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:49.117525   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:49.128944   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:49.128954   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:51.646959   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:56.649447   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:56.649917   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:56.689074   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:56.689217   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:56.711628   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:56.711732   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:56.726725   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:56.726799   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:56.739213   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:56.739287   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:56.749984   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:56.750050   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:56.761227   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:56.761296   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:56.772009   21370 logs.go:276] 0 containers: []
	W0520 05:02:56.772021   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:56.772084   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:56.783989   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:56.784006   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:56.784012   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:56.821588   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:56.821598   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:56.839838   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:56.839847   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:56.851841   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:56.851852   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:56.869068   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:56.869079   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:56.873706   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:56.873713   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:56.884824   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:56.884833   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:56.896100   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:56.896113   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:56.920328   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:56.920335   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:56.955074   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:56.955087   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:56.972029   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:56.972041   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:56.986378   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:56.986390   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:56.998098   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:56.998110   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:57.016285   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:57.016296   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:57.027469   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:57.027479   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:59.540753   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:04.542932   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:04.543062   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:03:04.554797   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:03:04.554877   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:03:04.566234   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:03:04.566303   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:03:04.578881   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:03:04.578973   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:03:04.590081   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:03:04.590147   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:03:04.601322   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:03:04.601394   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:03:04.613019   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:03:04.613093   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:03:04.624484   21370 logs.go:276] 0 containers: []
	W0520 05:03:04.624496   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:03:04.624555   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:03:04.636034   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:03:04.636054   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:03:04.636060   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:03:04.649652   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:03:04.649663   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:03:04.689440   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:03:04.689465   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:03:04.703715   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:03:04.703725   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:03:04.717107   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:03:04.717118   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:03:04.729337   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:03:04.729348   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:03:04.749515   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:03:04.749531   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:03:04.754771   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:03:04.754779   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:03:04.798548   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:03:04.798559   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:03:04.811325   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:03:04.811337   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:03:04.824378   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:03:04.824388   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:03:04.836645   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:03:04.836653   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:03:04.865817   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:03:04.865831   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:03:04.878124   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:03:04.878136   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:03:04.893361   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:03:04.893371   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:03:07.422058   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:12.424273   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:12.424455   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:03:12.440954   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:03:12.441033   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:03:12.453425   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:03:12.453491   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:03:12.464087   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:03:12.464159   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:03:12.474278   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:03:12.474344   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:03:12.484887   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:03:12.484945   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:03:12.494755   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:03:12.494821   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:03:12.506353   21370 logs.go:276] 0 containers: []
	W0520 05:03:12.506366   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:03:12.506433   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:03:12.521519   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:03:12.521537   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:03:12.521542   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:03:12.533259   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:03:12.533274   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:03:12.544940   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:03:12.544951   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:03:12.579812   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:03:12.579823   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:03:12.596153   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:03:12.596162   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:03:12.607903   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:03:12.607916   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:03:12.619585   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:03:12.619596   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:03:12.633926   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:03:12.633936   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:03:12.648566   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:03:12.648577   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:03:12.660892   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:03:12.660903   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:03:12.674205   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:03:12.674219   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:03:12.689341   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:03:12.689351   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:03:12.706937   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:03:12.706951   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:03:12.730230   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:03:12.730238   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:03:12.767947   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:03:12.767964   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:03:15.274993   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:20.277282   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:20.277495   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:03:20.301953   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:03:20.302074   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:03:20.318665   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:03:20.318745   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:03:20.331240   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:03:20.331325   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:03:20.341932   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:03:20.341999   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:03:20.353262   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:03:20.353331   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:03:20.364045   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:03:20.364118   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:03:20.374123   21370 logs.go:276] 0 containers: []
	W0520 05:03:20.374134   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:03:20.374191   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:03:20.384882   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:03:20.384901   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:03:20.384906   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:03:20.402988   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:03:20.403001   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:03:20.415495   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:03:20.415507   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:03:20.435864   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:03:20.435875   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:03:20.448026   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:03:20.448037   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:03:20.452334   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:03:20.452340   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:03:20.486772   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:03:20.486787   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:03:20.501595   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:03:20.501604   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:03:20.513494   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:03:20.513504   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:03:20.525566   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:03:20.525578   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:03:20.543427   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:03:20.543439   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:03:20.581896   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:03:20.581910   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:03:20.600804   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:03:20.600818   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:03:20.620570   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:03:20.620584   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:03:20.636983   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:03:20.636994   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:03:23.163110   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:28.165298   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:28.171080   21370 out.go:177] 
	W0520 05:03:28.174041   21370 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 05:03:28.174059   21370 out.go:239] * 
	W0520 05:03:28.175262   21370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:03:28.188976   21370 out.go:177] 

** /stderr **
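The stderr above is dominated by one pattern: roughly every 2.5 seconds, api_server.go issues a GET against https://10.0.2.15:8443/healthz with a 5-second per-request timeout (hence "Client.Timeout exceeded while awaiting headers" on every attempt), and the loop only gives up once the overall 6-minute node wait expires. A minimal Go sketch of that polling loop, illustrative only and not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint and intervals mirror the log above; adjust for your cluster.
		const url = "https://10.0.2.15:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second, // the source of "Client.Timeout exceeded"
			Transport: &http.Transport{
				// The apiserver's cert is not trusted by the host, so skip
				// verification for this illustrative check only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2500 * time.Millisecond) // roughly the cadence in the log
		}
		fmt.Println("apiserver healthz never reported healthy")
	}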
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-158000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-20 05:03:28.293752 -0700 PDT m=+1289.205089167
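The harness fails the test on the subprocess exit code, not the stderr text: minikube start returned exit status 80, the code accompanying the GUEST_START error above. A short sketch of how a Go harness recovers that code, using only the standard library:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical re-run of the failed command from the test above.
		cmd := exec.Command("out/minikube-darwin-arm64", "start",
			"-p", "running-upgrade-158000", "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=qemu2")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// For the run above this prints: exit status 80
			fmt.Println("exit status", exitErr.ExitCode())
		}
	}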
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-158000 -n running-upgrade-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-158000 -n running-upgrade-158000: exit status 2 (15.653262875s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
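status --format={{.Host}} renders the status through a Go template, so only the Host field reaches stdout; that is the lone "Running" in the block above, even though the apiserver is down. A toy reproduction of that formatting step (the Status struct here is a simplification, not minikube's real type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status loosely mirrors the fields a minikube status template can reference;
	// this struct exists only for illustration.
	type Status struct {
		Name      string
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Running", matching the stdout captured above.
		tmpl.Execute(os.Stdout, Status{Name: "running-upgrade-158000", Host: "Running"})
	}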
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-158000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-223000          | force-systemd-flag-223000 | jenkins | v1.33.1 | 20 May 24 04:53 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-420000              | force-systemd-env-420000  | jenkins | v1.33.1 | 20 May 24 04:53 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-420000           | force-systemd-env-420000  | jenkins | v1.33.1 | 20 May 24 04:53 PDT | 20 May 24 04:53 PDT |
	| start   | -p docker-flags-422000                | docker-flags-422000       | jenkins | v1.33.1 | 20 May 24 04:53 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-223000             | force-systemd-flag-223000 | jenkins | v1.33.1 | 20 May 24 04:53 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-223000          | force-systemd-flag-223000 | jenkins | v1.33.1 | 20 May 24 04:53 PDT | 20 May 24 04:53 PDT |
	| start   | -p cert-expiration-558000             | cert-expiration-558000    | jenkins | v1.33.1 | 20 May 24 04:53 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-422000 ssh               | docker-flags-422000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-422000 ssh               | docker-flags-422000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-422000                | docker-flags-422000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT | 20 May 24 04:54 PDT |
	| start   | -p cert-options-020000                | cert-options-020000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-020000 ssh               | cert-options-020000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-020000 -- sudo        | cert-options-020000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-020000                | cert-options-020000       | jenkins | v1.33.1 | 20 May 24 04:54 PDT | 20 May 24 04:54 PDT |
	| start   | -p running-upgrade-158000             | minikube                  | jenkins | v1.26.0 | 20 May 24 04:54 PDT | 20 May 24 04:55 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-158000             | running-upgrade-158000    | jenkins | v1.33.1 | 20 May 24 04:55 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-558000             | cert-expiration-558000    | jenkins | v1.33.1 | 20 May 24 04:57 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-558000             | cert-expiration-558000    | jenkins | v1.33.1 | 20 May 24 04:57 PDT | 20 May 24 04:57 PDT |
	| start   | -p kubernetes-upgrade-839000          | kubernetes-upgrade-839000 | jenkins | v1.33.1 | 20 May 24 04:57 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-839000          | kubernetes-upgrade-839000 | jenkins | v1.33.1 | 20 May 24 04:57 PDT | 20 May 24 04:57 PDT |
	| start   | -p kubernetes-upgrade-839000          | kubernetes-upgrade-839000 | jenkins | v1.33.1 | 20 May 24 04:57 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-839000          | kubernetes-upgrade-839000 | jenkins | v1.33.1 | 20 May 24 04:57 PDT | 20 May 24 04:57 PDT |
	| start   | -p stopped-upgrade-298000             | minikube                  | jenkins | v1.26.0 | 20 May 24 04:57 PDT | 20 May 24 04:58 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-298000 stop           | minikube                  | jenkins | v1.26.0 | 20 May 24 04:58 PDT | 20 May 24 04:58 PDT |
	| start   | -p stopped-upgrade-298000             | stopped-upgrade-298000    | jenkins | v1.33.1 | 20 May 24 04:58 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:58:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:58:25.069623   21535 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:58:25.069839   21535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:25.069843   21535 out.go:304] Setting ErrFile to fd 2...
	I0520 04:58:25.069846   21535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:25.070018   21535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:58:25.071250   21535 out.go:298] Setting JSON to false
	I0520 04:58:25.091257   21535 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10676,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:58:25.091335   21535 start.go:137] gopshost.Virtualization returned error: not implemented yet
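	
	For context: the hostinfo line above is produced by the gopsutil library, which minikube imports under the alias "gopshost". A minimal, self-contained Go sketch of the same two calls (the v3 module path is an assumption about the vendored version); Virtualization() is the call that returns "not implemented yet" on darwin/arm64:
	
	package main
	
	import (
		"fmt"
	
		gopshost "github.com/shirou/gopsutil/v3/host"
	)
	
	func main() {
		info, err := gopshost.Info() // hostname, uptime, platform, kernel, hostId, ...
		if err != nil {
			fmt.Println("hostinfo error:", err)
			return
		}
		fmt.Printf("hostinfo: %+v\n", *info)
	
		// On darwin/arm64 virtualization detection is unimplemented, hence the
		// warning "gopshost.Virtualization returned error: not implemented yet".
		system, role, err := gopshost.Virtualization()
		if err != nil {
			fmt.Println("gopshost.Virtualization returned error:", err)
			return
		}
		fmt.Println("virtualization:", system, role)
	}
	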
	I0520 04:58:25.095396   21535 out.go:177] * [stopped-upgrade-298000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:58:25.103344   21535 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:58:25.103429   21535 notify.go:220] Checking for updates...
	I0520 04:58:25.110394   21535 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:58:25.113395   21535 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:58:25.116411   21535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:25.119373   21535 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:58:25.122355   21535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:25.125661   21535 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:58:25.129373   21535 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:58:25.132324   21535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:25.136395   21535 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:58:25.142291   21535 start.go:297] selected driver: qemu2
	I0520 04:58:25.142296   21535 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:58:25.142347   21535 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:25.144837   21535 cni.go:84] Creating CNI manager for ""
	I0520 04:58:25.144855   21535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:58:25.144884   21535 start.go:340] cluster config:
	{Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:58:25.144939   21535 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:25.152408   21535 out.go:177] * Starting "stopped-upgrade-298000" primary control-plane node in "stopped-upgrade-298000" cluster
	I0520 04:58:25.156383   21535 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:58:25.156409   21535 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 04:58:25.156420   21535 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:25.156485   21535 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:58:25.156492   21535 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 04:58:25.156552   21535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/config.json ...
	I0520 04:58:25.156895   21535 start.go:360] acquireMachinesLock for stopped-upgrade-298000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:25.156931   21535 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "stopped-upgrade-298000"
	I0520 04:58:25.156940   21535 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:58:25.156946   21535 fix.go:54] fixHost starting: 
	I0520 04:58:25.157069   21535 fix.go:112] recreateIfNeeded on stopped-upgrade-298000: state=Stopped err=<nil>
	W0520 04:58:25.157079   21535 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:58:25.165337   21535 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-298000" ...
	I0520 04:58:27.051562   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:25.169419   21535 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/qemu.pid -nic user,model=virtio,hostfwd=tcp::54138-:22,hostfwd=tcp::54139-:2376,hostname=stopped-upgrade-298000 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/disk.qcow2
	I0520 04:58:25.213945   21535 main.go:141] libmachine: STDOUT: 
	I0520 04:58:25.213970   21535 main.go:141] libmachine: STDERR: 
	I0520 04:58:25.213974   21535 main.go:141] libmachine: Waiting for VM to start (ssh -p 54138 docker@127.0.0.1)...
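	
	The qemu-system-aarch64 invocation above relies on QEMU user-mode networking: each hostfwd rule forwards a host port to a guest port (54138 → 22 for SSH, 54139 → 2376 for the Docker API), which is why the VM is then reached as docker@127.0.0.1. A minimal Go sketch of launching such a command with os/exec; the flag set is trimmed and the local disk.qcow2 path is illustrative only:
	
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf", // Hypervisor.framework acceleration on Apple silicon
			"-m", "2200", "-smp", "2",
			// user-mode NIC: forward host ports to guest SSH and Docker
			"-nic", "user,model=virtio,hostfwd=tcp::54138-:22,hostfwd=tcp::54139-:2376",
			"-daemonize",
			"disk.qcow2", // hypothetical disk image path
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("qemu failed: %v\n%s", err, out)
		}
	}
	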
	I0520 04:58:32.054298   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:32.054531   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:32.066039   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:32.066120   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:32.077211   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:32.077290   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:32.088444   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:32.088508   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:32.100148   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:32.100213   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:32.113610   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:32.113675   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:32.124221   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:32.124293   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:32.135911   21370 logs.go:276] 0 containers: []
	W0520 04:58:32.135921   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:32.135971   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:32.146903   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:32.146932   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:32.146938   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:32.160755   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:32.160765   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:32.172929   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:32.172940   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:32.208352   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:32.208360   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:32.213058   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:32.213064   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:32.224792   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:32.224803   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:32.236816   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:32.236827   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:32.247937   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:32.247948   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:32.259453   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:32.259466   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:32.295224   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:32.295238   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:32.319973   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:32.319987   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:32.334510   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:32.334525   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:32.351141   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:32.351152   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:32.368557   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:32.368568   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:32.383503   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:32.383513   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:32.394749   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:32.394760   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:32.419487   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:32.419495   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:34.935786   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:39.936384   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
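	
	The repeated "Checking apiserver healthz" / "stopped" pair above is a bounded HTTPS probe: a GET against /healthz that fails with "Client.Timeout exceeded" when the apiserver never answers. A minimal Go sketch of that kind of probe; skipping certificate verification here is a simplification for brevity, not minikube's actual client setup:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			// a hung endpoint yields "Client.Timeout exceeded while awaiting headers"
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
	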
	I0520 04:58:39.936504   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:39.948717   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:39.948789   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:39.959318   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:39.959384   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:39.969611   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:39.969685   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:39.980461   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:39.980530   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:39.994877   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:39.994948   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:40.005105   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:40.005175   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:40.015752   21370 logs.go:276] 0 containers: []
	W0520 04:58:40.015767   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:40.015820   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:40.026296   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:40.026315   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:40.026321   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:40.038606   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:40.038616   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:40.042682   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:40.042690   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:40.070769   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:40.070779   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:40.084542   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:40.084557   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:40.098664   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:40.098673   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:40.134878   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:40.134888   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:40.148940   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:40.148950   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:40.163712   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:40.163724   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:40.180011   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:40.180021   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:40.191221   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:40.191231   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:40.213994   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:40.214000   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:40.227652   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:40.227661   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:40.238294   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:40.238306   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:40.249801   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:40.249814   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:40.265157   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:40.265170   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:40.300103   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:40.300113   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:42.813601   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:44.609972   21535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/config.json ...
	I0520 04:58:44.610283   21535 machine.go:94] provisionDockerMachine start ...
	I0520 04:58:44.610378   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.610621   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.610628   21535 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:58:44.673743   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:58:44.673756   21535 buildroot.go:166] provisioning hostname "stopped-upgrade-298000"
	I0520 04:58:44.673812   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.673941   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.673948   21535 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-298000 && echo "stopped-upgrade-298000" | sudo tee /etc/hostname
	I0520 04:58:44.736023   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-298000
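	
	Provisioning commands like the hostname change above run over the forwarded SSH port. A minimal Go sketch of executing a command the same way with golang.org/x/crypto/ssh; the port and key path come from the log, everything else is illustrative:
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; host key not pinned
		}
		client, err := ssh.Dial("tcp", "localhost:54138", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.Output("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("SSH cmd output: %s", out)
	}
	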
	
	I0520 04:58:44.736075   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.736193   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.736229   21535 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-298000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-298000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-298000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:58:44.795663   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:58:44.795680   21535 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18929-19024/.minikube CaCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18929-19024/.minikube}
	I0520 04:58:44.795692   21535 buildroot.go:174] setting up certificates
	I0520 04:58:44.795701   21535 provision.go:84] configureAuth start
	I0520 04:58:44.795707   21535 provision.go:143] copyHostCerts
	I0520 04:58:44.795774   21535 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem, removing ...
	I0520 04:58:44.795784   21535 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem
	I0520 04:58:44.795894   21535 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem (1082 bytes)
	I0520 04:58:44.796090   21535 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem, removing ...
	I0520 04:58:44.796094   21535 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem
	I0520 04:58:44.796140   21535 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem (1123 bytes)
	I0520 04:58:44.796250   21535 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem, removing ...
	I0520 04:58:44.796253   21535 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem
	I0520 04:58:44.796293   21535 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem (1675 bytes)
	I0520 04:58:44.796393   21535 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-298000 san=[127.0.0.1 localhost minikube stopped-upgrade-298000]
	I0520 04:58:44.858117   21535 provision.go:177] copyRemoteCerts
	I0520 04:58:44.858177   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:58:44.858188   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 04:58:44.887290   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 04:58:44.893785   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 04:58:44.900192   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 04:58:44.909095   21535 provision.go:87] duration metric: took 113.390542ms to configureAuth
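	
	configureAuth above generates a Docker TLS server certificate whose SANs match the log line (127.0.0.1, localhost, minikube, stopped-upgrade-298000). A self-signed Go sketch of that step; minikube actually signs with its ca-key.pem rather than self-signing, and the 26280h lifetime is the CertExpiration from the cluster config:
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-298000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-298000"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// self-signed: template doubles as parent; minikube passes its CA here instead
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	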
	I0520 04:58:44.909106   21535 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:58:44.909222   21535 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:58:44.909263   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.909406   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.909411   21535 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:58:44.963710   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:58:44.963719   21535 buildroot.go:70] root file system type: tmpfs
	I0520 04:58:44.963768   21535 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:58:44.963813   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.963932   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.963964   21535 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:58:45.020818   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:58:45.020860   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:45.020950   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:45.020958   21535 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:58:45.381013   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
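	
	The docker.service unit above is rendered from a template and only swapped in when it differs from what is on disk. A hedged text/template sketch of the rendering step (minikube's real template carries many more fields); the empty ExecStart= line matters because systemd rejects a second ExecStart= for non-oneshot services:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// drastically trimmed unit template; only the ExecStart handling is shown
	const unit = "[Service]\n" +
		"# clear the inherited ExecStart first; systemd rejects a second one otherwise\n" +
		"ExecStart=\n" +
		"ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}\n"
	
	func main() {
		t := template.Must(template.New("docker.service").Parse(unit))
		t.Execute(os.Stdout, struct{ Provider, ServiceCIDR string }{"qemu2", "10.96.0.0/12"})
	}
	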
	
	I0520 04:58:45.381027   21535 machine.go:97] duration metric: took 770.743041ms to provisionDockerMachine
	I0520 04:58:45.381034   21535 start.go:293] postStartSetup for "stopped-upgrade-298000" (driver="qemu2")
	I0520 04:58:45.381041   21535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:58:45.381104   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:58:45.381114   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 04:58:45.409066   21535 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:58:45.410324   21535 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 04:58:45.410331   21535 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18929-19024/.minikube/addons for local assets ...
	I0520 04:58:45.410414   21535 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18929-19024/.minikube/files for local assets ...
	I0520 04:58:45.410514   21535 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem -> 195172.pem in /etc/ssl/certs
	I0520 04:58:45.410626   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:58:45.413077   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem --> /etc/ssl/certs/195172.pem (1708 bytes)
	I0520 04:58:45.420015   21535 start.go:296] duration metric: took 38.975666ms for postStartSetup
	I0520 04:58:45.420028   21535 fix.go:56] duration metric: took 20.26323s for fixHost
	I0520 04:58:45.420062   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:45.420160   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:45.420165   21535 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 04:58:45.472877   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206325.165230337
	
	I0520 04:58:45.472886   21535 fix.go:216] guest clock: 1716206325.165230337
	I0520 04:58:45.472890   21535 fix.go:229] Guest: 2024-05-20 04:58:45.165230337 -0700 PDT Remote: 2024-05-20 04:58:45.42003 -0700 PDT m=+20.383817251 (delta=-254.799663ms)
	I0520 04:58:45.472903   21535 fix.go:200] guest clock delta is within tolerance: -254.799663ms
	I0520 04:58:45.472906   21535 start.go:83] releasing machines lock for "stopped-upgrade-298000", held for 20.316117625s
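	
	The guest-clock check above parses the guest's `date +%s.%N` output and compares it against the host clock. A worked Go example using the values from the log; the 2s tolerance is an assumption for illustration:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// guest clock from `date +%s.%N`: 1716206325.165230337
		guest := time.Unix(1716206325, 165230337)
		// host clock read just after the SSH round-trip (value from the log)
		remote := guest.Add(254799663 * time.Nanosecond)
		delta := guest.Sub(remote) // -254.799663ms, matching the logged delta
		fmt.Println("guest clock delta:", delta)
		fmt.Println("within tolerance:", delta.Abs() < 2*time.Second) // assumed 2s tolerance
	}
	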
	I0520 04:58:45.472962   21535 ssh_runner.go:195] Run: cat /version.json
	I0520 04:58:45.472972   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 04:58:45.472962   21535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:58:45.473002   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	W0520 04:58:45.473602   21535 sshutil.go:64] dial failure (will retry): dial tcp [::1]:54138: connect: connection refused
	I0520 04:58:45.473626   21535 retry.go:31] will retry after 232.14207ms: dial tcp [::1]:54138: connect: connection refused
	W0520 04:58:45.750848   21535 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 04:58:45.750973   21535 ssh_runner.go:195] Run: systemctl --version
	I0520 04:58:45.754662   21535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 04:58:45.757631   21535 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:58:45.757682   21535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 04:58:45.762696   21535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 04:58:45.770345   21535 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:58:45.770358   21535 start.go:494] detecting cgroup driver to use...
	I0520 04:58:45.770477   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:58:45.780018   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 04:58:45.784014   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:58:45.787673   21535 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:58:45.787702   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:58:45.791146   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:58:45.794513   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:58:45.797467   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:58:45.800287   21535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:58:45.803616   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:58:45.807095   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:58:45.810024   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:58:45.813011   21535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:58:45.816053   21535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:58:45.819159   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:45.895882   21535 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 04:58:45.906409   21535 start.go:494] detecting cgroup driver to use...
	I0520 04:58:45.906482   21535 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:58:45.911870   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:58:45.916542   21535 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:58:45.922022   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:58:45.926118   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:58:45.930674   21535 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:58:45.972011   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:58:45.977125   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:58:45.982200   21535 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:58:45.983553   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:58:45.986330   21535 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:58:45.991260   21535 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:58:46.073029   21535 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:58:46.158514   21535 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:58:46.158587   21535 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
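	
	The 130-byte daemon.json pushed above is what pins Docker's cgroup driver to cgroupfs. A Go sketch of producing such a file; the exact field set minikube writes is an assumption, though "exec-opts": ["native.cgroupdriver=cgroupfs"] is Docker's documented knob for this:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // content destined for /etc/docker/daemon.json
	}
	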
	I0520 04:58:46.163869   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:46.249788   21535 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:58:47.385713   21535 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.135912958s)
	I0520 04:58:47.385778   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:58:47.390359   21535 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 04:58:47.396532   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:58:47.401733   21535 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:58:47.480969   21535 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:58:47.561256   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:47.639003   21535 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:58:47.644484   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:58:47.649065   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:47.728893   21535 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:58:47.766481   21535 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:58:47.766556   21535 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:58:47.768501   21535 start.go:562] Will wait 60s for crictl version
	I0520 04:58:47.768545   21535 ssh_runner.go:195] Run: which crictl
	I0520 04:58:47.770148   21535 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:58:47.785213   21535 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 04:58:47.785289   21535 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:58:47.806169   21535 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:58:47.816010   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:47.816117   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:47.827547   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:47.827614   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:47.839086   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:47.839159   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:47.850969   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:47.851044   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:47.862958   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:47.863032   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:47.875854   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:47.875929   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:47.888755   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:47.888827   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:47.900877   21370 logs.go:276] 0 containers: []
	W0520 04:58:47.900888   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:47.900954   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:47.914660   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:47.914681   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:47.914687   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:47.954350   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:47.954366   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:47.970426   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:47.970439   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:47.983161   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:47.983176   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:48.002103   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:48.002115   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:58:48.014580   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:48.014593   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:48.028137   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:48.028150   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:48.041154   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:48.041166   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:48.054791   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:48.054805   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:48.067925   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:48.067937   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:48.073170   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:48.073182   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:48.111837   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:48.111850   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:48.138501   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:48.138516   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:48.153481   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:48.153493   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:48.168203   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:48.168216   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:48.183713   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:48.183724   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:48.201437   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:48.201447   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:47.822216   21535 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 04:58:47.822291   21535 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 04:58:47.823701   21535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:58:47.827896   21535 kubeadm.go:877] updating cluster {Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 04:58:47.827946   21535 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:58:47.827975   21535 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:58:47.839600   21535 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:58:47.839609   21535 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:58:47.839637   21535 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:58:47.843656   21535 ssh_runner.go:195] Run: which lz4
	I0520 04:58:47.845058   21535 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 04:58:47.846408   21535 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 04:58:47.846423   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 04:58:48.638132   21535 docker.go:649] duration metric: took 793.107292ms to copy over tarball
	I0520 04:58:48.638187   21535 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:58:49.849071   21535 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.210875958s)
	I0520 04:58:49.849088   21535 ssh_runner.go:146] rm: /preloaded.tar.lz4
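	
	The preload step scp's the ~360 MB tarball into the VM and shells out to `tar --xattrs -I lz4` to unpack it under /var. For comparison, a Go sketch that walks the same archive in-process; github.com/pierrec/lz4/v4 is an assumed dependency choice, not necessarily what minikube vendors:
	
	package main
	
	import (
		"archive/tar"
		"fmt"
		"io"
		"log"
		"os"
	
		"github.com/pierrec/lz4/v4"
	)
	
	func main() {
		f, err := os.Open("/preloaded.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		// decompress the lz4 stream, then walk the tar entries inside it
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(hdr.Name) // actual extraction to /var omitted for brevity
		}
	}
	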
	I0520 04:58:49.866678   21535 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:58:49.869989   21535 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 04:58:49.875246   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:49.958525   21535 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:58:50.726906   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:51.502955   21535 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5444235s)
	I0520 04:58:51.503054   21535 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:58:51.526060   21535 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:58:51.526069   21535 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:58:51.526074   21535 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 04:58:51.532405   21535 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.532484   21535 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.532534   21535 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:51.532533   21535 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:51.532609   21535 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:51.532618   21535 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:51.532865   21535 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:58:51.532898   21535 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:51.540512   21535 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:58:51.540599   21535 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.540665   21535 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.540780   21535 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:51.541196   21535 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:51.541377   21535 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:51.541386   21535 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:51.541474   21535 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:51.952812   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 04:58:51.959888   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.966441   21535 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 04:58:51.966461   21535 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 04:58:51.966514   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 04:58:51.968232   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.972076   21535 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 04:58:51.972095   21535 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.972143   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.980765   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:51.989363   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:58:51.989480   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 04:58:51.991476   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 04:58:51.991577   21535 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 04:58:51.991595   21535 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.991634   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:52.000352   21535 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 04:58:52.000357   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
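	[Editor's note: each cache load starts with the failed stat seen above: the image tarball is only copied to the node when it is not already there. A local stand-in for that check — the real one runs over SSH; the path is taken from the log:]

```go
package main

import (
	"fmt"
	"os"
)

// needsTransfer mirrors the stat-based existence check: a "No such file or
// directory" result means the tarball must be scp'd before docker load.
func needsTransfer(path string) bool {
	_, err := os.Stat(path)
	return os.IsNotExist(err)
}

func main() {
	fmt.Println(needsTransfer("/var/lib/minikube/images/pause_3.7"))
}
```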
	I0520 04:58:52.000380   21535 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:52.000389   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 04:58:52.000421   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0520 04:58:52.000964   21535 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:58:52.001073   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:52.009099   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 04:58:52.016860   21535 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 04:58:52.016874   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
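	[Editor's note: the load itself is the shell pipeline in the line above: cat the tarball as root and stream it into the daemon. A hedged Go sketch of issuing that same pipeline, with the path from the log and error handling added:]

```go
package main

import (
	"log"
	"os/exec"
)

// dockerLoad streams an image tarball into the Docker daemon, equivalent to:
//   /bin/bash -c "sudo cat <tarball> | docker load"
func dockerLoad(tarball string) error {
	cmd := exec.Command("/bin/bash", "-c", "sudo cat "+tarball+" | docker load")
	out, err := cmd.CombinedOutput()
	log.Printf("docker load output: %s", out)
	return err
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
		log.Fatal(err)
	}
}
```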
	I0520 04:58:52.019783   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 04:58:52.019836   21535 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 04:58:52.019851   21535 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:52.019898   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:52.020100   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:52.044694   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:52.065165   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 04:58:52.065201   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:58:52.065219   21535 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 04:58:52.065237   21535 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:52.065284   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:52.065304   21535 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 04:58:52.065306   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:58:52.065316   21535 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:52.065342   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:52.066714   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 04:58:52.066731   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 04:58:52.095724   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:58:52.095821   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 04:58:52.095847   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:58:52.103673   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 04:58:52.103697   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 04:58:52.105332   21535 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:58:52.105340   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 04:58:52.240930   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 04:58:52.289646   21535 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:58:52.289659   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0520 04:58:52.356214   21535 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
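	[Editor's note: the warning above flags an amd64 image in an arm64 cache before refetching it. One way to force the correct platform when refreshing such an entry, using the docker CLI's --platform flag — illustrative, not the exact command minikube runs:]

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Pull the arm64 variant explicitly; image name taken from the warning.
	cmd := exec.Command("docker", "pull", "--platform", "linux/arm64",
		"gcr.io/k8s-minikube/storage-provisioner:v5")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("pull failed: %v\n%s", err, out)
	}
}
```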
	I0520 04:58:52.356331   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:52.434111   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 04:58:52.434134   21535 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 04:58:52.434161   21535 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:52.434216   21535 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:52.448298   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:58:52.448410   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:58:52.449933   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 04:58:52.449948   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 04:58:52.479750   21535 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:58:52.479763   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 04:58:52.716386   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 04:58:52.716427   21535 cache_images.go:92] duration metric: took 1.1903555s to LoadCachedImages
	W0520 04:58:52.716469   21535 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0520 04:58:52.716475   21535 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 04:58:52.716527   21535 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-298000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
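	[Editor's note: the kubelet unit fragment above is written to the node as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below), then activated via daemon-reload and systemctl start. A sketch of that write, with the ExecStart abbreviated:]

```go
package main

import (
	"log"
	"os"
)

// Abbreviated drop-in; the real one carries the full kubelet flag set above.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --config=/var/lib/kubelet/config.yaml
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		[]byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	// Followed in the log by: systemctl daemon-reload && systemctl start kubelet
}
```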
	I0520 04:58:52.716587   21535 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:58:52.735249   21535 cni.go:84] Creating CNI manager for ""
	I0520 04:58:52.735261   21535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:58:52.735268   21535 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:58:52.735332   21535 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-298000 NodeName:stopped-upgrade-298000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:58:52.735408   21535 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-298000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
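	[Editor's note: the kubeadm config above is generated from the option struct a few lines earlier and written to /var/tmp/minikube/kubeadm.yaml.new. A miniature of that rendering step with Go's text/template — the template here is a made-up fragment, not minikube's real one:]

```go
package main

import (
	"os"
	"text/template"
)

// A tiny fragment of an InitConfiguration template; values mirror the log.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := t.Execute(os.Stdout, map[string]any{
		"NodeIP":    "10.0.2.15",
		"Port":      8443,
		"CRISocket": "/var/run/cri-dockerd.sock",
		"NodeName":  "stopped-upgrade-298000",
	}); err != nil {
		panic(err)
	}
}
```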
	I0520 04:58:52.735461   21535 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 04:58:52.738957   21535 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:58:52.738986   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 04:58:52.741592   21535 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 04:58:52.746388   21535 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:58:52.751281   21535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0520 04:58:52.757038   21535 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 04:58:52.758418   21535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
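	[Editor's note: the one-liner above makes the control-plane hosts entry idempotent: filter out any stale line ending in the hostname, append a fresh IP-to-name mapping, and copy the result back as root. The same logic on an in-memory string, for illustration:]

```go
package main

import (
	"fmt"
	"strings"
)

// patchHosts drops any existing line for name, then appends "ip\tname",
// mirroring the grep -v / echo / sudo cp pipeline in the log.
func patchHosts(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(patchHosts("127.0.0.1\tlocalhost\n", "10.0.2.15", "control-plane.minikube.internal"))
}
```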
	I0520 04:58:52.761883   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:52.843145   21535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:58:52.850221   21535 certs.go:68] Setting up /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000 for IP: 10.0.2.15
	I0520 04:58:52.850229   21535 certs.go:194] generating shared ca certs ...
	I0520 04:58:52.850238   21535 certs.go:226] acquiring lock for ca certs: {Name:mk319383c68f33c5310e8442d826dee5d3ed7b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:52.850402   21535 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.key
	I0520 04:58:52.850437   21535 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.key
	I0520 04:58:52.850442   21535 certs.go:256] generating profile certs ...
	I0520 04:58:52.850508   21535 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key
	I0520 04:58:52.850526   21535 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7
	I0520 04:58:52.850537   21535 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 04:58:53.022678   21535 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7 ...
	I0520 04:58:53.022689   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7: {Name:mk7049d0be65a263299d9c17e36039183748ec76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:53.023611   21535 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7 ...
	I0520 04:58:53.023620   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7: {Name:mk09b4e706952e42d7f87718e4d179ce5362915a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:53.023770   21535 certs.go:381] copying /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt
	I0520 04:58:53.023903   21535 certs.go:385] copying /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key
	I0520 04:58:53.024042   21535 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/proxy-client.key
	I0520 04:58:53.024171   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517.pem (1338 bytes)
	W0520 04:58:53.024191   21535 certs.go:480] ignoring /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517_empty.pem, impossibly tiny 0 bytes
	I0520 04:58:53.024196   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 04:58:53.024219   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem (1082 bytes)
	I0520 04:58:53.024237   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem (1123 bytes)
	I0520 04:58:53.024257   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem (1675 bytes)
	I0520 04:58:53.024294   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem (1708 bytes)
	I0520 04:58:53.024616   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:58:53.031801   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:58:53.039620   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:58:53.046418   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:58:53.053191   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 04:58:53.059945   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 04:58:53.067138   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:58:53.074020   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 04:58:53.080488   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem --> /usr/share/ca-certificates/195172.pem (1708 bytes)
	I0520 04:58:53.087453   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:58:53.094189   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517.pem --> /usr/share/ca-certificates/19517.pem (1338 bytes)
	I0520 04:58:53.100787   21535 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:58:53.105774   21535 ssh_runner.go:195] Run: openssl version
	I0520 04:58:53.107600   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19517.pem && ln -fs /usr/share/ca-certificates/19517.pem /etc/ssl/certs/19517.pem"
	I0520 04:58:53.110805   21535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19517.pem
	I0520 04:58:53.112243   21535 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 11:42 /usr/share/ca-certificates/19517.pem
	I0520 04:58:53.112266   21535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19517.pem
	I0520 04:58:53.114088   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19517.pem /etc/ssl/certs/51391683.0"
	I0520 04:58:53.116928   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195172.pem && ln -fs /usr/share/ca-certificates/195172.pem /etc/ssl/certs/195172.pem"
	I0520 04:58:53.120197   21535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195172.pem
	I0520 04:58:53.121633   21535 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 11:42 /usr/share/ca-certificates/195172.pem
	I0520 04:58:53.121656   21535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195172.pem
	I0520 04:58:53.123279   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195172.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:58:53.126271   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:58:53.129085   21535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:58:53.130509   21535 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:54 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:58:53.130528   21535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:58:53.132211   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
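	[Editor's note: the openssl x509 -hash calls above compute each CA's subject-name hash so the certificate can be linked as <hash>.0 under /etc/ssl/certs, which is how OpenSSL locates trust anchors. A sketch of that hash-and-link step; requires root for the symlink, paths from the log:]

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkByHash asks openssl for the cert's subject hash and links the file as
// /etc/ssl/certs/<hash>.0, e.g. b5213941.0 for minikubeCA.pem in this log.
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(pem, "/etc/ssl/certs/"+hash+".0")
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```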
	I0520 04:58:53.135444   21535 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:58:53.136921   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 04:58:53.139121   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 04:58:53.141073   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 04:58:53.143039   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 04:58:53.144734   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 04:58:53.146504   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 04:58:53.148277   21535 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:58:53.148340   21535 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:58:53.158428   21535 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 04:58:53.161291   21535 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 04:58:53.161297   21535 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 04:58:53.161300   21535 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 04:58:53.161322   21535 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 04:58:53.164101   21535 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:58:53.164393   21535 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-298000" does not appear in /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:58:53.164486   21535 kubeconfig.go:62] /Users/jenkins/minikube-integration/18929-19024/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-298000" cluster setting kubeconfig missing "stopped-upgrade-298000" context setting]
	I0520 04:58:53.164668   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:53.165087   21535 kapi.go:59] client config for stopped-upgrade-298000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10586c580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:58:53.165395   21535 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 04:58:53.168123   21535 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-298000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0520 04:58:53.168129   21535 kubeadm.go:1154] stopping kube-system containers ...
	I0520 04:58:53.168168   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:58:53.178927   21535 docker.go:483] Stopping containers: [9fc3ef3cc4af 8c289c175a53 1c19435c85dd aa9323402490 6730da3d3f1a c9cc7b978cad b2100d7c0bd2 df4e1107aafa]
	I0520 04:58:53.178993   21535 ssh_runner.go:195] Run: docker stop 9fc3ef3cc4af 8c289c175a53 1c19435c85dd aa9323402490 6730da3d3f1a c9cc7b978cad b2100d7c0bd2 df4e1107aafa
	I0520 04:58:53.189128   21535 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 04:58:53.194841   21535 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:58:53.197519   21535 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:58:53.197524   21535 kubeadm.go:156] found existing configuration files:
	
	I0520 04:58:53.197543   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf
	I0520 04:58:53.200215   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:58:53.200236   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:58:53.203085   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf
	I0520 04:58:53.205637   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:58:53.205669   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:58:53.208165   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf
	I0520 04:58:53.211033   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:58:53.211053   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:58:53.213597   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf
	I0520 04:58:53.216081   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:58:53.216100   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
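	[Editor's note: the grep/rm pairs above implement a stale-config sweep: any kubeconfig that does not mention the expected control-plane endpoint is deleted so kubeadm can regenerate it. The same sweep in a few lines of Go — endpoint and paths from the log; the real checks run over SSH:]

```go
package main

import (
	"os"
	"strings"
)

// sweepStale removes each kubeconfig that is missing or does not reference
// the expected control-plane endpoint, forcing kubeadm to rewrite it.
func sweepStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
		}
	}
}

func main() {
	sweepStale("https://control-plane.minikube.internal:54172", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```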
	I0520 04:58:53.219138   21535 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:58:53.221709   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.246003   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.734300   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.864798   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.895106   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
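	[Editor's note: rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A sketch of that sequence; the phase names are real kubeadm subcommands, while the sudo/env PATH wrapping from the log is omitted:]

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
}
```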
	I0520 04:58:53.919515   21535 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:58:53.919591   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:58:54.421763   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:58:54.921679   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:58:54.926089   21535 api_server.go:72] duration metric: took 1.006583583s to wait for apiserver process to appear ...
	I0520 04:58:54.926098   21535 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:58:54.926106   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
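	[Editor's note: from here the log is dominated by the healthz poll: GET https://10.0.2.15:8443/healthz with a per-request timeout, logged as "stopped: ... context deadline exceeded" while the apiserver stays down. A self-contained sketch of such a poll; the InsecureSkipVerify is illustrative, standing in for probing a not-yet-trusted local endpoint:]

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // each attempt in the log times out like this
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```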
	I0520 04:58:55.729294   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:55.729438   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:58:55.740933   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:58:55.741016   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:58:55.752142   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:58:55.752208   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:58:55.762797   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:58:55.762867   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:58:55.773420   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:58:55.773496   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:58:55.783829   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:58:55.783895   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:58:55.794137   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:58:55.794192   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:58:55.804461   21370 logs.go:276] 0 containers: []
	W0520 04:58:55.804474   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:58:55.804528   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:58:55.815118   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:58:55.815135   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:58:55.815139   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:58:55.828684   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:58:55.828694   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:58:55.852105   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:58:55.852113   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:58:55.887998   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:58:55.888004   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:58:55.901954   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:58:55.901966   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:58:55.926893   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:58:55.926907   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:58:55.946287   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:58:55.946297   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:58:55.957993   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:58:55.958003   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:58:55.974947   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:58:55.974957   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:58:55.986425   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:58:55.986434   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:58:55.997939   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:58:55.997951   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:58:56.002747   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:58:56.002765   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:58:56.040826   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:58:56.040837   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:58:56.055940   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:58:56.055950   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:58:56.067848   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:58:56.067861   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:58:56.079688   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:58:56.079698   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:58:56.091496   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:58:56.091507   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
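	[Editor's note: each failed healthz round triggers the diagnostics pass above: list containers by kube-system name filter, then tail 400 lines of each one's logs. A condensed version of that loop for a single component:]

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Find apiserver containers, as in: docker ps -a --filter=name=k8s_kube-apiserver
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("== %s ==\n%s\n", id, logs)
	}
}
```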
	I0520 04:58:58.604986   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:59.928216   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:59.928258   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:03.607155   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:03.607353   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:59:03.626265   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:59:03.626366   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:59:03.640783   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:59:03.640861   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:59:03.652332   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:59:03.652423   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:59:03.663048   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:59:03.663119   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:59:03.677356   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:59:03.677422   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:59:03.687563   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:59:03.687633   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:59:03.697615   21370 logs.go:276] 0 containers: []
	W0520 04:59:03.697626   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:59:03.697682   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:59:03.708036   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:59:03.708056   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:59:03.708061   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:59:03.722568   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:59:03.722583   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:59:03.736768   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:59:03.736779   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:59:03.760729   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:59:03.760737   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:59:03.775424   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:59:03.775437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:59:03.795144   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:59:03.795158   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:59:03.806337   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:59:03.806346   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:59:03.817640   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:59:03.817651   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:59:03.829736   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:59:03.829747   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:59:03.833989   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:59:03.833995   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:59:03.867471   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:59:03.867481   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:59:03.884684   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:59:03.884698   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:59:03.900267   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:59:03.900278   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:59:03.912219   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:59:03.912230   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:59:03.947772   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:59:03.947780   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:59:03.973226   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:59:03.973244   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:59:03.986304   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:59:03.986317   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:59:04.928499   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:04.928538   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:06.506958   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:09.929301   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:09.929351   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:11.509167   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:11.509279   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:59:11.520872   21370 logs.go:276] 2 containers: [c0acb6c53ba2 317e103732b9]
	I0520 04:59:11.520949   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:59:11.532199   21370 logs.go:276] 2 containers: [8a6a95e6d769 87218e8ecbeb]
	I0520 04:59:11.532269   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:59:11.544056   21370 logs.go:276] 1 containers: [86bb396827f1]
	I0520 04:59:11.544130   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:59:11.555457   21370 logs.go:276] 2 containers: [9ee8977e1513 dc7f1ac48726]
	I0520 04:59:11.555530   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:59:11.571276   21370 logs.go:276] 1 containers: [7e952312c482]
	I0520 04:59:11.571352   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:59:11.584595   21370 logs.go:276] 2 containers: [43bb19c378e6 3e8334495368]
	I0520 04:59:11.584677   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:59:11.600186   21370 logs.go:276] 0 containers: []
	W0520 04:59:11.600201   21370 logs.go:278] No container was found matching "kindnet"
	I0520 04:59:11.600272   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:59:11.611391   21370 logs.go:276] 2 containers: [ba8b80452cf8 9e73b8a4277e]
	I0520 04:59:11.611413   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 04:59:11.611419   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:59:11.650460   21370 logs.go:123] Gathering logs for kube-scheduler [9ee8977e1513] ...
	I0520 04:59:11.650475   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ee8977e1513"
	I0520 04:59:11.662899   21370 logs.go:123] Gathering logs for kube-controller-manager [3e8334495368] ...
	I0520 04:59:11.662915   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8334495368"
	I0520 04:59:11.675767   21370 logs.go:123] Gathering logs for storage-provisioner [ba8b80452cf8] ...
	I0520 04:59:11.675781   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8b80452cf8"
	I0520 04:59:11.689064   21370 logs.go:123] Gathering logs for storage-provisioner [9e73b8a4277e] ...
	I0520 04:59:11.689077   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e73b8a4277e"
	I0520 04:59:11.701968   21370 logs.go:123] Gathering logs for kube-apiserver [c0acb6c53ba2] ...
	I0520 04:59:11.701981   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0acb6c53ba2"
	I0520 04:59:11.718222   21370 logs.go:123] Gathering logs for etcd [8a6a95e6d769] ...
	I0520 04:59:11.718232   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a6a95e6d769"
	I0520 04:59:11.732572   21370 logs.go:123] Gathering logs for coredns [86bb396827f1] ...
	I0520 04:59:11.732588   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bb396827f1"
	I0520 04:59:11.744934   21370 logs.go:123] Gathering logs for kube-proxy [7e952312c482] ...
	I0520 04:59:11.744946   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e952312c482"
	I0520 04:59:11.756916   21370 logs.go:123] Gathering logs for kube-controller-manager [43bb19c378e6] ...
	I0520 04:59:11.756931   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43bb19c378e6"
	I0520 04:59:11.778448   21370 logs.go:123] Gathering logs for container status ...
	I0520 04:59:11.778469   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:59:11.790952   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 04:59:11.790965   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:59:11.795101   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:59:11.795108   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:59:11.830329   21370 logs.go:123] Gathering logs for kube-apiserver [317e103732b9] ...
	I0520 04:59:11.830343   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317e103732b9"
	I0520 04:59:11.855197   21370 logs.go:123] Gathering logs for Docker ...
	I0520 04:59:11.855211   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:59:11.879236   21370 logs.go:123] Gathering logs for etcd [87218e8ecbeb] ...
	I0520 04:59:11.879244   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87218e8ecbeb"
	I0520 04:59:11.893041   21370 logs.go:123] Gathering logs for kube-scheduler [dc7f1ac48726] ...
	I0520 04:59:11.893056   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc7f1ac48726"
	I0520 04:59:14.413997   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:14.930062   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:14.930116   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:19.416006   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:19.416126   21370 kubeadm.go:591] duration metric: took 4m4.329975333s to restartPrimaryControlPlane
	W0520 04:59:19.416211   21370 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 04:59:19.416249   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 04:59:19.930953   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:19.930972   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:20.413672   21370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:59:20.418820   21370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:59:20.421427   21370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:59:20.424759   21370 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:59:20.424765   21370 kubeadm.go:156] found existing configuration files:
	
	I0520 04:59:20.424787   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/admin.conf
	I0520 04:59:20.427739   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:59:20.427761   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:59:20.430540   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/kubelet.conf
	I0520 04:59:20.432934   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:59:20.432954   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:59:20.436185   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/controller-manager.conf
	I0520 04:59:20.439105   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:59:20.439132   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:59:20.441604   21370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/scheduler.conf
	I0520 04:59:20.444543   21370 kubeadm.go:162] "https://control-plane.minikube.internal:53952" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53952 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:59:20.444561   21370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
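The sequence above applies one rule per file: if a kubeconfig under /etc/kubernetes does not contain the expected control-plane endpoint, delete it so the subsequent `kubeadm init` regenerates it. A sketch of the same check in Go, assuming direct file access (minikube instead shells out to `grep` and `rm` over SSH, as logged):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleKubeconfigs drops any kubeconfig that does not reference the
    // endpoint this cluster should be using, matching the grep-then-rm
    // sequence in the log. A missing file is treated the same as a stale one.
    func removeStaleKubeconfigs(endpoint string) {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f) // ignore the error: the file may not exist at all
                fmt.Println("cleared:", f)
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:53952")
    }
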
	I0520 04:59:20.447544   21370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:59:20.465438   21370 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 04:59:20.465465   21370 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:59:20.521736   21370 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:59:20.521789   21370 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:59:20.521853   21370 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 04:59:20.578453   21370 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:59:20.584492   21370 out.go:204]   - Generating certificates and keys ...
	I0520 04:59:20.584530   21370 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:59:20.584557   21370 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:59:20.584597   21370 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 04:59:20.584669   21370 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 04:59:20.584717   21370 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 04:59:20.584755   21370 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 04:59:20.584794   21370 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 04:59:20.584866   21370 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 04:59:20.584969   21370 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 04:59:20.585041   21370 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 04:59:20.585063   21370 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 04:59:20.585104   21370 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:59:20.643393   21370 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:59:20.719269   21370 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:59:20.913872   21370 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:59:21.049483   21370 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:59:21.080902   21370 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:59:21.081992   21370 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:59:21.082014   21370 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:59:21.166310   21370 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:59:21.170518   21370 out.go:204]   - Booting up control plane ...
	I0520 04:59:21.170569   21370 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:59:21.170612   21370 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:59:21.170647   21370 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:59:21.170693   21370 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:59:21.170777   21370 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
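The [control-plane] and [etcd] phases above only write files: one static Pod manifest per component under /etc/kubernetes/manifests, which the kubelet watches and runs; the wait phase then blocks until those Pods report healthy. A hypothetical existence check for the four manifests (file names taken from the --ignore-preflight-errors list above):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // The four static Pod manifests kubeadm writes in its control-plane
        // and etcd phases; the kubelet picks them up from this directory.
        for _, m := range []string{
            "kube-apiserver.yaml",
            "kube-controller-manager.yaml",
            "kube-scheduler.yaml",
            "etcd.yaml",
        } {
            p := filepath.Join("/etc/kubernetes/manifests", m)
            if _, err := os.Stat(p); err != nil {
                fmt.Println("missing:", p)
            } else {
                fmt.Println("present:", p)
            }
        }
    }
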
	I0520 04:59:24.931978   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:24.932076   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:25.671214   21370 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502199 seconds
	I0520 04:59:25.671373   21370 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:59:25.675402   21370 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:59:26.184306   21370 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:59:26.184395   21370 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-158000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:59:26.691840   21370 kubeadm.go:309] [bootstrap-token] Using token: vtrpym.bejbd6ufnp30co3p
	I0520 04:59:26.697994   21370 out.go:204]   - Configuring RBAC rules ...
	I0520 04:59:26.698067   21370 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:59:26.698764   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:59:26.704133   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:59:26.705237   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0520 04:59:26.706043   21370 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:59:26.706894   21370 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:59:26.710219   21370 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:59:26.891893   21370 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:59:27.101009   21370 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:59:27.101503   21370 kubeadm.go:309] 
	I0520 04:59:27.101534   21370 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:59:27.101538   21370 kubeadm.go:309] 
	I0520 04:59:27.101587   21370 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:59:27.101593   21370 kubeadm.go:309] 
	I0520 04:59:27.101606   21370 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:59:27.101638   21370 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:59:27.101664   21370 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:59:27.101667   21370 kubeadm.go:309] 
	I0520 04:59:27.101710   21370 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:59:27.101719   21370 kubeadm.go:309] 
	I0520 04:59:27.101748   21370 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:59:27.101753   21370 kubeadm.go:309] 
	I0520 04:59:27.101779   21370 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:59:27.101835   21370 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:59:27.101880   21370 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:59:27.101885   21370 kubeadm.go:309] 
	I0520 04:59:27.101927   21370 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:59:27.101975   21370 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:59:27.101979   21370 kubeadm.go:309] 
	I0520 04:59:27.102020   21370 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vtrpym.bejbd6ufnp30co3p \
	I0520 04:59:27.102109   21370 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc \
	I0520 04:59:27.102124   21370 kubeadm.go:309] 	--control-plane 
	I0520 04:59:27.102130   21370 kubeadm.go:309] 
	I0520 04:59:27.102173   21370 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:59:27.102180   21370 kubeadm.go:309] 
	I0520 04:59:27.102225   21370 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vtrpym.bejbd6ufnp30co3p \
	I0520 04:59:27.102289   21370 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc 
	I0520 04:59:27.102370   21370 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
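The --discovery-token-ca-cert-hash value printed in the join commands above is, per kubeadm convention, the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA during bootstrap. A sketch of recomputing it from the CA file (path taken from the [certs] phase above):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // CA path from the [certs] phase above; a stock kubeadm install uses
        // /etc/kubernetes/pki/ca.crt instead.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
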
	I0520 04:59:27.102378   21370 cni.go:84] Creating CNI manager for ""
	I0520 04:59:27.102387   21370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:59:27.105775   21370 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 04:59:27.112708   21370 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 04:59:27.115704   21370 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
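The 496-byte conflist copied above is not reproduced in the log. For reference, a bridge-plugin configuration in the standard CNI conflist format has the following shape (contents illustrative only, not minikube's exact file):

    package main

    import "os"

    // Illustrative bridge conflist; the exact 496 bytes minikube copies are
    // not shown in the log, so the subnet and plugin options here are assumptions.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        // Same destination path as the scp step in the log.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
            []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
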
	I0520 04:59:27.121919   21370 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:59:27.121976   21370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:59:27.121977   21370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-158000 minikube.k8s.io/updated_at=2024_05_20T04_59_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=running-upgrade-158000 minikube.k8s.io/primary=true
	I0520 04:59:27.168438   21370 ops.go:34] apiserver oom_adj: -16
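The -16 reported here is read from /proc/<pid>/oom_adj; a negative adjustment tells the kernel's OOM killer to prefer other victims over the apiserver under memory pressure. The probe is just pgrep plus a file read, e.g.:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same pgrep invocation as the log: newest process whose full command
        // line matches kube-apiserver.*minikube.*
        pidOut, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(pidOut))
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
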
	I0520 04:59:27.168436   21370 kubeadm.go:1107] duration metric: took 46.505667ms to wait for elevateKubeSystemPrivileges
	W0520 04:59:27.168559   21370 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:59:27.168565   21370 kubeadm.go:393] duration metric: took 4m12.096962583s to StartCluster
	I0520 04:59:27.168575   21370 settings.go:142] acquiring lock: {Name:mkb0015ab6abb1526406adb43e2b3d4392387c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:59:27.168729   21370 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:59:27.169086   21370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:59:27.169284   21370 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:59:27.172886   21370 out.go:177] * Verifying Kubernetes components...
	I0520 04:59:27.169331   21370 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:59:27.169475   21370 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:59:27.180755   21370 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-158000"
	I0520 04:59:27.180759   21370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:59:27.180766   21370 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-158000"
	W0520 04:59:27.180769   21370 addons.go:243] addon storage-provisioner should already be in state true
	I0520 04:59:27.180778   21370 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0520 04:59:27.180781   21370 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-158000"
	I0520 04:59:27.180790   21370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-158000"
	I0520 04:59:27.181887   21370 kapi.go:59] client config for running-upgrade-158000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040d0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
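The rest.Config dumped above maps directly onto client-go: host plus client certificate, key, and CA from the profile directory. A minimal sketch of constructing the equivalent client on the host and issuing the same /healthz probe (paths copied from the log; error handling trimmed):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/running-upgrade-158000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same endpoint the health loop polls: GET /healthz on the apiserver.
        body, err := clientset.Discovery().RESTClient().
            Get().AbsPath("/healthz").Do(context.Background()).Raw()
        fmt.Println(string(body), err)
    }
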
	I0520 04:59:27.182797   21370 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-158000"
	W0520 04:59:27.182802   21370 addons.go:243] addon default-storageclass should already be in state true
	I0520 04:59:27.182811   21370 host.go:66] Checking if "running-upgrade-158000" exists ...
	I0520 04:59:27.186703   21370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:59:27.189798   21370 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:59:27.189804   21370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:59:27.189810   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0520 04:59:27.190547   21370 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:59:27.190552   21370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:59:27.190556   21370 sshutil.go:53] new ssh client: &{IP:localhost Port:53920 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/running-upgrade-158000/id_rsa Username:docker}
	I0520 04:59:27.271711   21370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:59:27.276446   21370 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:59:27.276483   21370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:59:27.280813   21370 api_server.go:72] duration metric: took 111.518875ms to wait for apiserver process to appear ...
	I0520 04:59:27.280821   21370 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:59:27.280828   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:27.302726   21370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:59:27.305548   21370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:59:29.933751   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:29.933802   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:32.282892   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:32.282935   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:34.935030   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:34.935051   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:37.283291   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:37.283329   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:39.937071   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:39.937137   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:42.283698   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:42.283719   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:44.939336   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:44.939384   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:47.284157   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:47.284200   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:49.941769   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:49.941836   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:52.285202   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:52.285246   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:54.944352   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:54.944632   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:59:54.975503   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 04:59:54.975638   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:59:54.999499   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 04:59:54.999589   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:59:55.012215   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 04:59:55.012288   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:59:55.023463   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 04:59:55.023528   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:59:55.036521   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 04:59:55.036592   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:59:55.048550   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 04:59:55.048624   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:59:55.059026   21535 logs.go:276] 0 containers: []
	W0520 04:59:55.059037   21535 logs.go:278] No container was found matching "kindnet"
	I0520 04:59:55.059095   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:59:57.286109   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:57.286157   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 04:59:57.674082   21370 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 04:59:57.677489   21370 out.go:177] * Enabled addons: storage-provisioner
	I0520 04:59:57.685333   21370 addons.go:505] duration metric: took 30.516250416s for enable addons: enabled=[storage-provisioner]
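The two PIDs interleaved in this log (21370 and 21535) each run the same health loop: GET https://10.0.2.15:8443/healthz with a short per-request timeout, log `stopped: ... context deadline exceeded` on failure, and retry roughly every five seconds. The shape of that loop with plain net/http (TLS verification skipped here for brevity; minikube validates against the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Short per-request timeout, matching the "Client.Timeout exceeded"
            // failures in the log.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustrative shortcut: minikube validates the apiserver
                // certificate against the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(5 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(5 * time.Second)
        }
    }
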
	I0520 04:59:55.069975   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 04:59:55.073559   21535 logs.go:123] Gathering logs for container status ...
	I0520 04:59:55.073565   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:59:55.087697   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 04:59:55.087708   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 04:59:55.103779   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 04:59:55.103791   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 04:59:55.121591   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 04:59:55.121600   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 04:59:55.135964   21535 logs.go:123] Gathering logs for Docker ...
	I0520 04:59:55.135974   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:59:55.161671   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 04:59:55.161688   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:59:55.202062   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 04:59:55.202070   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 04:59:55.216811   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 04:59:55.216825   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 04:59:55.259568   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 04:59:55.259581   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 04:59:55.275617   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 04:59:55.275634   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 04:59:55.287030   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 04:59:55.287039   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 04:59:55.298367   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 04:59:55.298380   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 04:59:55.310406   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 04:59:55.310415   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 04:59:55.321846   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 04:59:55.321856   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:59:55.326155   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:59:55.326161   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:59:55.440985   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 04:59:55.440999   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 04:59:55.455224   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 04:59:55.455237   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 04:59:57.969023   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:02.287435   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:02.287480   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:02.971404   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:02.971689   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:03.001701   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:03.001828   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:03.018530   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:03.018634   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:03.031153   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:03.031222   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:03.042119   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:03.042191   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:03.052189   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:03.052251   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:03.063201   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:03.063270   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:03.074276   21535 logs.go:276] 0 containers: []
	W0520 05:00:03.074292   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:03.074350   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:03.084810   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:03.084832   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:03.084837   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:03.127011   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:03.127027   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:03.165887   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:03.165901   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:03.182014   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:03.182025   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:03.193622   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:03.193633   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:03.205669   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:03.205680   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:03.221321   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:03.221333   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:03.232659   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:03.232668   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:03.250291   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:03.250301   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:03.264449   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:03.264459   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:03.280983   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:03.280992   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:03.296402   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:03.296412   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:03.321149   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:03.321157   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:03.359292   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:03.359300   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:03.363418   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:03.363423   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:03.378904   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:03.378914   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:03.393209   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:03.393220   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:07.288973   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:07.289015   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:05.907507   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:12.290870   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:12.290906   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:10.909970   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:10.910182   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:10.938324   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:10.938419   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:10.953268   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:10.953344   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:10.965437   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:10.965509   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:10.976370   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:10.976437   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:10.986939   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:10.987004   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:10.997307   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:10.997380   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:11.007452   21535 logs.go:276] 0 containers: []
	W0520 05:00:11.007464   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:11.007521   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:11.017948   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:11.017968   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:11.017975   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:11.030855   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:11.030869   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:11.035477   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:11.035484   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:11.052228   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:11.052237   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:11.064438   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:11.064448   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:11.076105   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:11.076116   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:11.096464   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:11.096473   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:11.114158   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:11.114169   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:11.128139   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:11.128150   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:11.145267   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:11.145280   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:11.160150   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:11.160162   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:11.200042   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:11.200053   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:11.211587   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:11.211599   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:11.236867   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:11.236877   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:11.251789   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:11.251802   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:11.264273   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:11.264283   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:11.302862   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:11.302872   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:13.842510   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:17.291477   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:17.291501   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:18.844764   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:18.844911   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:18.858804   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:18.858885   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:18.870465   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:18.870533   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:18.880833   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:18.880895   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:18.891442   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:18.891505   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:18.901405   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:18.901477   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:18.911622   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:18.911696   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:18.926301   21535 logs.go:276] 0 containers: []
	W0520 05:00:18.926310   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:18.926361   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:18.936135   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:18.936154   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:18.936159   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:18.949988   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:18.949999   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:18.961946   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:18.961958   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:18.987412   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:18.987423   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:18.991757   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:18.991764   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:19.011162   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:19.011174   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:19.022968   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:19.022979   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:19.034570   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:19.034581   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:19.071312   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:19.071325   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:19.088762   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:19.088771   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:19.102736   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:19.102749   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:19.123185   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:19.123196   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:19.134574   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:19.134585   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:19.146427   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:19.146437   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:19.164665   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:19.164675   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:19.178824   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:19.178836   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:19.217470   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:19.217490   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:22.293628   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:22.293650   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:21.758035   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:27.295787   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:27.295884   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:27.306533   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:27.306608   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:27.318511   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:27.318585   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:27.328993   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:27.329061   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:27.340550   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:27.340618   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:27.351266   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:27.351331   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:27.361552   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:27.361618   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:27.371917   21370 logs.go:276] 0 containers: []
	W0520 05:00:27.371927   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:27.371984   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:27.381911   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:27.381927   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:27.381933   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:27.393209   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:27.393225   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:27.430801   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:27.430809   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:27.434852   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:27.434858   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:27.473487   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:27.473498   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:27.485887   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:27.485897   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:27.500249   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:27.500257   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:27.517160   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:27.517170   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:27.531494   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:27.531506   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:27.549672   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:27.549685   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:27.561746   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:27.561757   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:27.574346   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:27.574356   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:27.586749   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:27.586763   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:26.760309   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:26.760565   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:26.784786   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:26.784895   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:26.801733   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:26.801814   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:26.815021   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:26.815097   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:26.826491   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:26.826570   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:26.837055   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:26.837121   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:26.846984   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:26.847056   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:26.856471   21535 logs.go:276] 0 containers: []
	W0520 05:00:26.856484   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:26.856541   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:26.866907   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:26.866926   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:26.866933   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:26.878287   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:26.878298   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:26.897836   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:26.897849   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:26.909610   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:26.909621   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:26.913654   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:26.913662   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:26.950054   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:26.950065   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:26.964763   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:26.964774   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:26.977158   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:26.977171   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:26.991660   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:26.991670   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:27.032796   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:27.032814   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:27.044288   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:27.044302   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:27.059544   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:27.059559   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:27.097218   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:27.097233   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:27.112545   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:27.112561   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:27.127132   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:27.127146   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:27.143914   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:27.143924   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:27.155096   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:27.155106   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
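
The whole cycle is driven by the healthz probe: each "Checking apiserver healthz" line is followed five seconds later by "stopped: ... Client.Timeout exceeded while awaiting headers", after which logs are gathered and the probe retries. A minimal sketch of such a probe, assuming a 5-second client timeout (inferred from the gaps between log lines, not confirmed from minikube's source) and skipping TLS verification so the sketch stays self-contained:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz mirrors the repeating probe above: GET the endpoint with a
    // short client timeout so a hung apiserver fails fast.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver's certificate is not trusted from here;
                // skipping verification is an assumption for the sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded, as in the log
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        for {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(2 * time.Second) // back off, gather logs, retry
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }
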
	I0520 05:00:29.681691   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:30.112425   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:34.684033   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:34.684227   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:34.701513   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:34.701597   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:34.714807   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:34.714880   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:34.725972   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:34.726043   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:34.736612   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:34.736681   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:34.746563   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:34.746618   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:34.757401   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:34.757466   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:34.767773   21535 logs.go:276] 0 containers: []
	W0520 05:00:34.767785   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:34.767846   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:34.778712   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:34.778731   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:34.778737   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:34.792066   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:34.792076   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:34.803618   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:34.803629   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:34.817312   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:34.817322   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:34.836415   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:34.836424   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:34.849902   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:34.849912   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:34.868142   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:34.868152   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:34.882293   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:34.882303   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:34.894292   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:34.894301   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:34.932961   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:34.932973   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:34.937131   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:34.937140   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:34.951836   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:34.951845   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:34.968262   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:34.968271   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:34.981431   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:34.981441   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:34.992851   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:34.992859   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:35.016018   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:35.016025   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:35.052478   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:35.052488   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:35.113152   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:35.113230   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:35.123813   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:35.123883   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:35.134367   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:35.134437   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:35.145006   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:35.145078   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:35.155347   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:35.155408   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:35.165421   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:35.165489   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:35.175330   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:35.175397   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:35.185793   21370 logs.go:276] 0 containers: []
	W0520 05:00:35.185819   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:35.185877   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:35.196250   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:35.196268   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:35.196274   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:35.235557   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:35.235566   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:35.239934   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:35.239944   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:35.278326   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:35.278338   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:35.293741   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:35.293752   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:35.305104   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:35.305115   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:35.320542   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:35.320554   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:35.344892   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:35.344902   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:35.358686   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:35.358697   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:35.373183   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:35.373194   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:35.386273   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:35.386285   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:35.401429   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:35.401441   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:35.412863   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:35.412876   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:37.932886   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:37.593147   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:42.933472   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:42.933558   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:42.945765   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:42.945842   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:42.960129   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:42.960202   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:42.971024   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:42.971098   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:42.981420   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:42.981488   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:42.993236   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:42.993315   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:43.004958   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:43.005035   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:43.016241   21370 logs.go:276] 0 containers: []
	W0520 05:00:43.016253   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:43.016310   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:43.026569   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:43.026586   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:43.026593   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:43.063324   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:43.063332   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:43.067615   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:43.067620   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:43.081957   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:43.081971   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:43.095253   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:43.095263   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:43.106658   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:43.106672   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:43.121085   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:43.121098   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:43.144640   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:43.144647   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:43.179048   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:43.179062   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:43.191083   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:43.191097   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:43.202266   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:43.202290   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:43.219338   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:43.219352   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:43.230754   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:43.230768   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
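
The "container status" step uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backticks substitute crictl's full path when it is installed (or the bare name, so the line still parses), and if that invocation fails outright, the trailing || falls through to docker ps -a. The same preference order expressed in Go, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when present, otherwise falls back to
    // docker, matching the shell one-liner in the log above.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(out, err)
    }
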
	I0520 05:00:42.595753   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:42.595942   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:42.612287   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:42.612368   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:42.626442   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:42.626510   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:42.636427   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:42.636489   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:42.646896   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:42.646968   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:42.657144   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:42.657213   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:42.667752   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:42.667821   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:42.678225   21535 logs.go:276] 0 containers: []
	W0520 05:00:42.678238   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:42.678297   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:42.688639   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:42.688659   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:42.688664   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:42.726485   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:42.726491   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:42.744541   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:42.744551   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:42.769431   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:42.769439   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:42.773847   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:42.773853   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:42.789263   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:42.789274   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:42.803727   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:42.803741   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:42.814996   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:42.815008   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:42.832548   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:42.832562   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:42.867596   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:42.867607   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:42.879827   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:42.879840   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:42.891247   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:42.891261   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:42.909257   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:42.909270   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:42.920861   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:42.920873   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:42.935823   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:42.935830   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:42.982969   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:42.982980   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:42.995401   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:42.995410   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:45.744035   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:45.509265   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:50.745048   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:50.745123   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:50.756583   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:50.756659   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:50.773032   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:50.773104   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:50.787397   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:50.787467   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:50.799726   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:50.799802   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:50.811521   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:50.811609   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:50.823388   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:50.823461   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:50.834616   21370 logs.go:276] 0 containers: []
	W0520 05:00:50.834629   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:50.834690   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:50.846142   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:50.846157   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:50.846163   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:50.871014   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:50.871025   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:50.875537   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:50.875548   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:50.912980   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:50.912993   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:50.928285   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:50.928294   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:50.948270   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:50.948284   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:50.964475   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:50.964484   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:50.981476   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:50.981485   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:50.996973   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:50.996983   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:51.035949   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:51.035959   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:51.058148   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:51.058158   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:51.069880   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:51.069892   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:51.081494   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:51.081504   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
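
Host-level logs are collected alongside the container logs: journalctl -u kubelet -n 400 scopes the journal to the kubelet unit and caps it at 400 entries, while dmesg -PH -L=never --level warn,err,crit,alert,emerg prints kernel messages of warning severity and above in human-readable form (-H), without a pager (-P) or color (-L=never), trimmed by tail -n 400. A sketch that runs both, assuming passwordless sudo on the target machine:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hostLogs mirrors the kubelet and dmesg gathering steps from the log.
    func hostLogs() (string, string, error) {
        kubelet, err := exec.Command("bash", "-c",
            "sudo journalctl -u kubelet -n 400").CombinedOutput()
        if err != nil {
            return "", "", err
        }
        kernel, err := exec.Command("bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
        return string(kubelet), string(kernel), err
    }

    func main() {
        k, d, err := hostLogs()
        fmt.Println(len(k), len(d), err)
    }
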
	I0520 05:00:53.594916   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:50.511969   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:50.512204   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:50.539002   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:50.539119   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:50.555845   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:50.555929   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:50.569298   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:50.569377   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:50.580574   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:50.580634   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:50.591032   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:50.591089   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:50.605314   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:50.605385   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:50.615778   21535 logs.go:276] 0 containers: []
	W0520 05:00:50.615791   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:50.615850   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:50.627016   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:50.627033   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:50.627038   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:50.639434   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:50.639448   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:50.678712   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:50.678721   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:50.693648   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:50.693660   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:50.717224   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:50.717234   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:50.729019   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:50.729029   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:50.752246   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:50.752262   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:50.764171   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:50.764184   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:50.777012   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:50.777024   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:50.792206   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:50.792218   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:50.807117   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:50.807128   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:50.846559   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:50.846567   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:50.886820   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:50.886841   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:50.901762   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:50.901778   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:50.914918   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:50.914930   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:50.927451   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:50.927468   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:50.932101   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:50.932114   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:53.449115   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:58.597083   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:58.597188   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:58.609222   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:00:58.609286   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:58.620679   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:00:58.620755   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:58.631948   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:00:58.632026   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:58.643036   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:00:58.643119   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:58.655491   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:00:58.655571   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:58.667270   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:00:58.667343   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:58.679259   21370 logs.go:276] 0 containers: []
	W0520 05:00:58.679269   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:58.679331   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:58.690331   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:00:58.690346   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:00:58.690350   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:00:58.709103   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:00:58.709116   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:00:58.722429   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:00:58.722439   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:58.734716   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:58.734727   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:58.739198   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:00:58.739211   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:00:58.752351   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:00:58.752363   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:00:58.768401   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:00:58.768412   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:00:58.789523   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:00:58.789531   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:00:58.806649   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:00:58.806661   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:00:58.820194   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:00:58.820208   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:00:58.835962   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:58.835976   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:58.860815   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:58.860825   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:58.898490   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:58.898498   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
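
The "describe nodes" step does not depend on a kubectl on the host PATH: minikube stages a version-matched binary under /var/lib/minikube/binaries/v1.24.1/ inside the guest and points it at the in-guest kubeconfig. A sketch of the same invocation, assuming it runs where that path exists:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describeNodes invokes the kubectl binary staged for the cluster's
    // Kubernetes version, against the in-VM kubeconfig, as in the log above.
    func describeNodes() (string, error) {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describeNodes()
        fmt.Println(out, err)
    }
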
	I0520 05:00:58.450292   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:58.450500   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:58.473680   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:58.473769   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:58.486703   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:58.486780   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:58.498955   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:58.499024   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:58.509474   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:58.509548   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:58.524170   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:58.524234   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:58.534959   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:58.535026   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:58.545037   21535 logs.go:276] 0 containers: []
	W0520 05:00:58.545050   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:58.545102   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:58.559595   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:58.559617   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:58.559622   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:58.571921   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:58.571932   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:58.585214   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:58.585225   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:58.603071   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:58.603081   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:58.627672   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:58.627690   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:58.632632   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:58.632642   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:58.645374   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:58.645384   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:58.664284   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:58.664295   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:58.704027   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:58.704039   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:58.720513   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:58.720524   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:58.735629   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:58.735638   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:58.748118   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:58.748133   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:58.787697   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:58.787711   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:58.802671   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:58.802686   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:58.814586   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:58.814598   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:58.829011   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:58.829023   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:58.846741   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:58.846751   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:01.435417   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:01.388521   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:06.437817   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:06.437974   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:06.455137   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:06.455219   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:06.469456   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:06.469529   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:06.481921   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:06.481988   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:06.494313   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:06.494370   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:06.506217   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:06.506279   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:06.523679   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:06.523750   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:06.535918   21370 logs.go:276] 0 containers: []
	W0520 05:01:06.535928   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:06.535981   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:06.547954   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:06.547971   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:06.547977   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:06.565258   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:06.565269   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:06.584777   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:06.584786   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:06.609677   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:06.609695   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:06.648496   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:06.648514   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:06.656036   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:06.656049   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:06.672928   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:06.672940   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:06.688707   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:06.688718   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:06.707317   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:06.707330   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:06.719830   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:06.719842   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:06.759507   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:06.759519   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:06.773163   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:06.773171   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:06.791575   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:06.791584   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:09.307389   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:06.390896   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:06.391288   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:06.424067   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:06.424202   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:06.443891   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:06.443977   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:06.459405   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:06.459477   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:06.474862   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:06.474934   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:06.492126   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:06.492194   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:06.504121   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:06.504193   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:06.515892   21535 logs.go:276] 0 containers: []
	W0520 05:01:06.515905   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:06.515962   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:06.527881   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:06.527899   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:06.527904   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:06.540272   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:06.540288   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:06.566799   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:06.566807   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:06.581748   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:06.581761   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:06.602468   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:06.602480   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:06.618221   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:06.618233   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:06.666126   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:06.666146   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:06.682744   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:06.682761   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:06.701434   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:06.701450   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:06.722290   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:06.722300   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:06.734435   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:06.734447   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:06.772606   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:06.772619   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:06.777574   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:06.777587   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:06.791266   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:06.791277   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:06.803665   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:06.803677   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:06.824207   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:06.824221   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:06.835654   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:06.835664   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:09.376446   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:14.309717   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:14.309927   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:14.333204   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:14.333323   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:14.348529   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:14.348607   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:14.361461   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:14.361526   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:14.373669   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:14.373744   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:14.385551   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:14.385621   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:14.398387   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:14.398462   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:14.410454   21370 logs.go:276] 0 containers: []
	W0520 05:01:14.410466   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:14.410523   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:14.422918   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:14.422933   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:14.422939   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:14.436505   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:14.436515   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:14.450122   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:14.450136   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:14.476713   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:14.476723   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:14.490234   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:14.490252   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:14.528305   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:14.528321   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:14.543492   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:14.543505   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:14.556602   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:14.556611   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:14.573393   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:14.573404   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:14.378626   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:14.378687   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:14.389930   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:14.390003   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:14.405587   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:14.405660   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:14.416718   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:14.416789   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:14.428083   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:14.428153   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:14.439400   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:14.439475   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:14.450670   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:14.450739   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:14.461274   21535 logs.go:276] 0 containers: []
	W0520 05:01:14.461286   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:14.461349   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:14.475888   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:14.475909   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:14.475915   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:14.480612   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:14.480621   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:14.517601   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:14.517613   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:14.540886   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:14.540904   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:14.553869   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:14.553881   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:14.578899   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:14.578914   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:14.619706   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:14.619715   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:14.634842   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:14.634853   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:14.648122   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:14.648135   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:14.664102   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:14.664113   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:14.689467   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:14.689479   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:14.701938   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:14.701950   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:14.719275   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:14.719289   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:14.733295   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:14.733308   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:14.744803   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:14.744812   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:14.783944   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:14.783957   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:14.795635   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:14.795646   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:14.596494   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:14.599211   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:14.637256   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:14.637267   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:14.642112   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:14.642123   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:14.669956   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:14.669969   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:17.194251   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:17.312262   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:22.196500   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
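Each "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pair above is one HTTPS GET against https://10.0.2.15:8443/healthz that never receives response headers; the timestamps (check at 05:01:17.19, stopped at 05:01:22.19) put the client timeout at about 5 s. A minimal sketch of such a probe; skipping TLS verification is an assumption for this ad-hoc client, since it doesn't load the cluster CA the way minikube's real check does via the kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the gap between "Checking" and "stopped"
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            // On timeout, Go's net/http produces the same
            // "context deadline exceeded (Client.Timeout exceeded while
            // awaiting headers)" error text seen in the log lines above.
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }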
	I0520 05:01:22.196871   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:22.236503   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:22.236625   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:22.255436   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:22.255519   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:22.269295   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:22.269370   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:22.282135   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:22.282212   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:22.293539   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:22.293614   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:22.312954   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:22.313020   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:22.325019   21370 logs.go:276] 0 containers: []
	W0520 05:01:22.325027   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:22.325058   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:22.337422   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:22.337437   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:22.337444   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:22.379105   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:22.379124   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:22.394682   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:22.394694   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:22.410900   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:22.410909   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:22.429772   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:22.429782   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:22.442704   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:22.442715   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:22.468078   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:22.468089   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:22.482013   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:22.482026   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:22.486867   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:22.486878   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:22.527246   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:22.527261   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:22.547060   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:22.547072   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:22.560978   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:22.560992   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:22.574545   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:22.574557   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:22.313034   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:22.313081   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:22.324155   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:22.324232   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:22.337555   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:22.337621   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:22.349760   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:22.349822   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:22.361839   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:22.361905   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:22.373063   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:22.373124   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:22.384993   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:22.385061   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:22.396158   21535 logs.go:276] 0 containers: []
	W0520 05:01:22.396169   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:22.396233   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:22.408213   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:22.408232   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:22.408237   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:22.433768   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:22.433778   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:22.474907   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:22.474924   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:22.493227   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:22.493239   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:22.509788   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:22.509801   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:22.522182   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:22.522198   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:22.537143   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:22.537155   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:22.551646   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:22.551656   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:22.565261   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:22.565277   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:22.577638   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:22.577652   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:22.593764   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:22.593782   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:22.632493   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:22.632508   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:22.646918   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:22.646929   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:22.658386   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:22.658400   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:22.671897   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:22.671915   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:22.684117   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:22.684133   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:22.688788   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:22.688794   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:25.094981   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:25.227304   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:30.097333   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
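Several iterations in, the cadence is visible: a 5 s probe, a log-gather pass, then roughly a 3 s pause before the next probe, repeating until an overall deadline. A sketch of that outer loop under those inferred timings; the 2-minute deadline is an assumption, not taken from this log:

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    func apiserverHealthy(client *http.Client, url string) bool {
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func waitForAPIServer(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if apiserverHealthy(client, url) {
                return nil
            }
            // In the real run, the full log-gather pass happens here,
            // between the "stopped" line and the next "Checking" line.
            time.Sleep(3 * time.Second) // pause observed in the timestamps above
        }
        return errors.New("apiserver never became healthy")
    }

    func main() {
        if err := waitForAPIServer("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }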
	I0520 05:01:30.097545   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:30.120173   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:30.120267   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:30.135365   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:30.135439   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:30.148315   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:30.148376   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:30.159406   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:30.159475   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:30.170520   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:30.170589   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:30.181544   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:30.181607   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:30.192735   21370 logs.go:276] 0 containers: []
	W0520 05:01:30.192745   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:30.192801   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:30.203620   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:30.203636   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:30.203642   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:30.241343   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:30.241354   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:30.258211   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:30.258224   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:30.274678   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:30.274692   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:30.288635   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:30.288644   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:30.308168   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:30.308180   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:30.322187   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:30.322199   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:30.347346   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:30.347363   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:30.386344   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:30.386359   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:30.392860   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:30.392876   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:30.408471   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:30.408485   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:30.425299   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:30.425311   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:30.439380   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:30.439395   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:32.954496   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:30.229511   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:30.229615   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:30.241115   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:30.241194   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:30.253065   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:30.253137   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:30.264454   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:30.264529   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:30.275893   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:30.275959   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:30.287368   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:30.287436   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:30.299877   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:30.299953   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:30.310730   21535 logs.go:276] 0 containers: []
	W0520 05:01:30.310740   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:30.310798   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:30.322549   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:30.322566   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:30.322570   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:30.341809   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:30.341818   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:30.354961   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:30.354971   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:30.373880   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:30.373890   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:30.390050   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:30.390061   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:30.430063   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:30.430076   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:30.434474   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:30.434483   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:30.475032   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:30.475042   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:30.486533   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:30.486549   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:30.502570   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:30.502583   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:30.517954   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:30.517966   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:30.531303   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:30.531318   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:30.553945   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:30.553952   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:30.593151   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:30.593164   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:30.607507   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:30.607521   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:30.618665   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:30.618679   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:30.630595   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:30.630606   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:33.143425   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:37.956894   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:37.957389   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:37.971735   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:37.971816   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:37.990580   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:37.990650   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:38.001272   21370 logs.go:276] 2 containers: [4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:38.001338   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:38.015234   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:38.015301   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:38.025570   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:38.025643   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:38.036136   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:38.036195   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:38.050312   21370 logs.go:276] 0 containers: []
	W0520 05:01:38.050323   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:38.050385   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:38.060817   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:38.060833   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:38.060838   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:38.065816   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:38.065824   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:38.079592   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:38.079603   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:38.092703   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:38.092715   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:38.104618   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:38.104631   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:38.119448   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:38.119460   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:38.132205   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:38.132218   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:38.170339   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:38.170351   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:38.207847   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:38.207857   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:38.220171   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:38.220182   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:38.232817   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:38.232828   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:38.251915   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:38.251926   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:38.278690   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:38.278702   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:38.145744   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:38.145850   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:38.157370   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:38.157446   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:38.168397   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:38.168468   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:38.179988   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:38.180111   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:38.191083   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:38.191159   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:38.207494   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:38.207567   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:38.224563   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:38.224639   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:38.235692   21535 logs.go:276] 0 containers: []
	W0520 05:01:38.235705   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:38.235774   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:38.247182   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:38.247200   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:38.247205   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:38.285927   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:38.285938   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:38.303105   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:38.303118   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:38.307754   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:38.307761   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:38.345958   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:38.345974   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:38.358188   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:38.358199   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:38.375436   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:38.375449   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:38.399295   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:38.399305   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:38.438653   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:38.438662   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:38.456670   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:38.456681   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:38.477659   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:38.477671   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:38.490345   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:38.490356   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:38.505034   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:38.505046   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:38.516620   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:38.516631   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:38.531497   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:38.531506   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:38.543702   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:38.543715   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:38.555223   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:38.555235   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:40.795166   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:41.068328   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:45.796864   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:45.797080   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:45.820566   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:45.820676   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:45.835305   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:45.835378   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:45.847670   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:45.847739   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:45.859261   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:45.859327   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:45.874772   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:45.874844   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:45.885998   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:45.886065   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:45.896247   21370 logs.go:276] 0 containers: []
	W0520 05:01:45.896260   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:45.896336   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:45.909803   21370 logs.go:276] 1 containers: [ffea2e6e531d]
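At 05:01:45 the coredns filter starts matching 4 containers instead of 2. Because the discovery pass uses `docker ps -a`, exited containers stay in the list, so this is consistent with the two original coredns containers having been replaced by new ones (56d4854231b6 and 95ad72ffea28 joining 4ee2c0dfaeaf and 6b496c84088a) while the apiserver stayed unreachable. A sketch that also records each container's state so such restarts are visible ({{.State}} is a standard `docker ps --format` field; the wrapper itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_coredns",
            "--format", "{{.ID}} {{.State}}").Output()
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            fmt.Println(line) // e.g. "56d4854231b6 running" / "4ee2c0dfaeaf exited"
        }
    }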
	I0520 05:01:45.909824   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:45.909833   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:45.946519   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:45.946528   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:45.962686   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:01:45.962695   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:01:45.973725   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:45.973740   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:45.991700   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:45.991709   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:46.004202   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:46.004213   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:46.017723   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:46.017732   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:46.028829   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:46.028838   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:46.040068   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:46.040078   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:46.079596   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:46.079609   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:46.085021   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:01:46.085032   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:01:46.097610   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:46.097622   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:46.111326   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:46.111337   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:46.126986   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:46.126998   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:46.152145   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:46.152163   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:48.667340   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:46.070487   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:46.070614   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:46.081956   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:46.082033   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:46.093899   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:46.093977   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:46.105791   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:46.105867   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:46.117601   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:46.117697   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:46.130748   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:46.130817   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:46.141925   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:46.141991   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:46.152615   21535 logs.go:276] 0 containers: []
	W0520 05:01:46.152624   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:46.152686   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:46.164188   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:46.164209   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:46.164215   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:46.178163   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:46.178174   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:46.191415   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:46.191425   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:46.228828   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:46.228841   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:46.240776   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:46.240786   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:46.251939   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:46.251950   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:46.274830   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:46.274839   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:46.286482   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:46.286492   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:46.323318   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:46.323325   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:46.361187   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:46.361201   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:46.375569   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:46.375580   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:46.390061   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:46.390074   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:46.402251   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:46.402260   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:46.417645   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:46.417658   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:46.421716   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:46.421725   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:46.436991   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:46.437003   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:46.454095   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:46.454108   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:48.970195   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:53.670085   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:53.670334   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:53.693421   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:01:53.693533   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:53.708113   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:01:53.708188   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:53.720994   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:01:53.721067   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:53.731342   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:01:53.731406   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:53.741887   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:01:53.741954   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:53.752421   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:01:53.752490   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:53.762698   21370 logs.go:276] 0 containers: []
	W0520 05:01:53.762710   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:53.762768   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:53.773395   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:01:53.773411   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:01:53.773415   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:01:53.786000   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:53.786011   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:53.790763   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:01:53.790771   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:01:53.802927   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:01:53.802937   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:01:53.814748   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:01:53.814759   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:01:53.832747   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:53.832758   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:53.871107   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:01:53.871117   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:01:53.884952   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:01:53.884964   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:01:53.897029   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:01:53.897040   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:01:53.908591   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:01:53.908601   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:01:53.919666   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:01:53.919679   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:01:53.936864   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:53.936877   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:53.962417   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:01:53.962428   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:53.974758   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:53.974768   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:54.029316   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:01:54.029328   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:01:53.972388   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:53.972576   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:53.988098   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:53.988171   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:53.999507   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:53.999590   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:54.011107   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:54.011181   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:54.022351   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:54.022436   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:54.037571   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:54.037648   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:54.048716   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:54.048788   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:54.058939   21535 logs.go:276] 0 containers: []
	W0520 05:01:54.058955   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:54.059010   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:54.070000   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:54.070018   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:54.070023   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:54.107551   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:54.107560   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:54.121889   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:54.121904   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:54.135900   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:54.135913   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:54.147584   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:54.147593   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:54.164068   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:54.164081   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:54.175760   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:54.175772   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:54.187576   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:54.187589   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:54.191774   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:54.191804   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:54.203003   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:54.203013   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:54.214588   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:54.214597   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:54.228441   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:54.228455   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:54.240152   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:54.240161   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:54.263748   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:54.263756   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:54.299479   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:54.299490   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:54.337095   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:54.337107   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:54.354296   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:54.354307   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:56.546720   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:56.873389   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:01.549457   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:01.549843   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:01.581608   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:01.581739   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:01.601156   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:01.601255   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:01.615694   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:01.615777   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:01.627766   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:01.627835   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:01.638728   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:01.638799   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:01.649267   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:01.649340   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:01.660337   21370 logs.go:276] 0 containers: []
	W0520 05:02:01.660347   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:01.660409   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:01.671433   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:01.671451   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:01.671457   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:01.683719   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:01.683729   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:01.708495   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:01.708505   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:01.713495   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:01.713506   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:01.724939   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:01.724952   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:01.738995   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:01.739004   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:01.777802   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:01.777816   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:01.794861   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:01.794874   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:01.809934   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:01.809947   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:01.834182   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:01.834195   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:01.846000   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:01.846012   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:01.857541   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:01.857552   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:01.893465   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:01.893476   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:01.906427   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:01.906439   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:01.918728   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:01.918741   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:04.433394   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:01.875732   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:01.875842   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:01.887594   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:01.887667   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:01.899484   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:01.899561   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:01.911334   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:01.911405   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:01.923177   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:01.923253   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:01.933881   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:01.933951   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:01.945100   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:01.945166   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:01.955436   21535 logs.go:276] 0 containers: []
	W0520 05:02:01.955447   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:01.955500   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:01.966581   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:01.966601   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:01.966607   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:01.980185   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:01.980195   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:01.995458   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:01.995471   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:02.009837   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:02.009847   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:02.022042   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:02.022056   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:02.060962   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:02.060971   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:02.098540   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:02.098554   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:02.113100   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:02.113113   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:02.130957   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:02.130966   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:02.141923   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:02.141937   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:02.154846   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:02.154857   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:02.166926   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:02.166935   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:02.171529   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:02.171535   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:02.205381   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:02.205394   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:02.219524   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:02.219538   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:02.239817   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:02.239831   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:02.251505   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:02.251514   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:04.776222   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:09.435774   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:09.436135   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:09.474173   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:09.474306   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:09.493979   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:09.494077   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:09.508524   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:09.508594   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:09.520775   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:09.520851   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:09.532011   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:09.532073   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:09.542499   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:09.542568   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:09.553239   21370 logs.go:276] 0 containers: []
	W0520 05:02:09.553247   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:09.553302   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:09.568146   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:09.568163   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:09.568169   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:09.580248   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:09.580258   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:09.592400   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:09.592411   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:09.776489   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:09.776579   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:09.789029   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:09.789100   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:09.799933   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:09.800009   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:09.815981   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:09.816055   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:09.827074   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:09.827150   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:09.837844   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:09.837916   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:09.848630   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:09.848709   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:09.859694   21535 logs.go:276] 0 containers: []
	W0520 05:02:09.859706   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:09.859767   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:09.870307   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:09.870327   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:09.870334   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:09.909078   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:09.909086   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:09.944482   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:09.944494   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:09.956397   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:09.956407   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:09.967516   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:09.967526   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:09.981700   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:09.981710   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:10.019610   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:10.019624   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:10.033790   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:10.033800   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:10.049388   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:10.049401   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:10.064887   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:10.064897   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:09.610774   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:09.610783   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:09.625632   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:09.625645   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:09.640121   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:09.640132   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:09.652214   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:09.652224   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:09.667358   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:09.667369   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:09.705872   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:09.705883   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:09.710429   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:09.710437   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:09.747423   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:09.747437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:09.759272   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:09.759286   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:09.770728   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:09.770738   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:09.798189   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:09.798205   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:09.816329   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:09.816337   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:12.334025   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:10.079405   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:10.079418   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:10.091017   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:10.091029   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:10.113954   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:10.113964   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:10.126213   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:10.126223   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:10.130516   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:10.130522   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:10.145851   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:10.145860   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:10.161374   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:10.161385   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:12.683572   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:17.336314   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:17.336480   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:17.351432   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:17.351502   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:17.361830   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:17.361892   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:17.372787   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:17.372861   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:17.383839   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:17.383907   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:17.394725   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:17.394796   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:17.405099   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:17.405163   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:17.415232   21370 logs.go:276] 0 containers: []
	W0520 05:02:17.415246   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:17.415302   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:17.425821   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:17.425836   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:17.425841   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:17.441161   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:17.441173   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:17.459857   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:17.459867   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:17.474533   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:17.474548   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:17.485890   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:17.485900   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:17.502849   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:17.502859   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:17.507299   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:17.507306   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:17.518482   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:17.518496   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:17.555762   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:17.555769   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:17.591258   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:17.591269   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:17.605744   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:17.605754   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:17.617501   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:17.617512   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:17.628629   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:17.628639   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:17.647281   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:17.647291   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:17.671783   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:17.671791   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:17.685841   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:17.685931   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:17.696659   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:17.696733   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:17.707047   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:17.707125   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:17.717689   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:17.717749   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:17.727671   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:17.727749   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:17.738178   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:17.738254   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:17.748769   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:17.748830   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:17.759411   21535 logs.go:276] 0 containers: []
	W0520 05:02:17.759422   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:17.759484   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:17.770453   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:17.770471   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:17.770476   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:17.809397   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:17.809408   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:17.824387   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:17.824397   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:17.835907   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:17.835918   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:17.847671   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:17.847679   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:17.884860   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:17.884875   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:17.898893   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:17.898905   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:17.912408   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:17.912418   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:17.929853   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:17.929862   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:17.952828   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:17.952833   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:17.987775   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:17.987784   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:18.001952   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:18.001961   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:18.017322   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:18.017332   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:18.030959   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:18.030968   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:18.041837   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:18.041849   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:18.054070   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:18.054080   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:18.058278   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:18.058284   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:20.185848   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:20.572287   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:25.188037   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:25.188152   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:25.200008   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:25.200091   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:25.210977   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:25.211045   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:25.221600   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:25.221671   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:25.231670   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:25.231738   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:25.242827   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:25.242894   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:25.253571   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:25.253637   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:25.263390   21370 logs.go:276] 0 containers: []
	W0520 05:02:25.263402   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:25.263461   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:25.273747   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:25.273767   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:25.273771   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:25.288747   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:25.288757   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:25.300450   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:25.300460   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:25.305430   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:25.305437   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:25.319017   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:25.319027   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:25.330285   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:25.330296   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:25.342480   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:25.342491   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:25.364261   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:25.364271   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:25.376198   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:25.376207   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:25.412531   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:25.412542   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:25.424561   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:25.424573   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:25.442374   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:25.442385   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:25.465926   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:25.465937   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:25.478500   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:25.478515   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:25.491612   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:25.491623   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:28.031772   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:25.575504   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:25.575592   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:25.587150   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:25.587224   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:25.598039   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:25.598103   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:25.608668   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:25.608740   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:25.622342   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:25.622419   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:25.632285   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:25.632350   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:25.642815   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:25.642888   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:25.652939   21535 logs.go:276] 0 containers: []
	W0520 05:02:25.652956   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:25.653012   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:25.663166   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:25.663185   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:25.663191   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:25.704853   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:25.704864   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:25.747606   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:25.747622   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:25.760520   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:25.760531   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:25.772480   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:25.772492   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:25.784718   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:25.784729   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:25.796704   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:25.796717   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:25.810016   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:25.810026   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:25.814248   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:25.814254   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:25.827919   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:25.827929   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:25.842764   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:25.842774   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:25.859516   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:25.859526   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:25.870667   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:25.870680   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:25.891968   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:25.891976   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:25.928539   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:25.928546   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:25.945300   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:25.945310   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:25.959786   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:25.959798   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:28.473402   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:33.032561   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:33.032861   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:33.062981   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:33.063105   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:33.081363   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:33.081455   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:33.095195   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:33.095272   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:33.106961   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:33.107024   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:33.117206   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:33.117277   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:33.128020   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:33.128087   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:33.137873   21370 logs.go:276] 0 containers: []
	W0520 05:02:33.137884   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:33.137934   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:33.148177   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:33.148192   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:33.148197   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:33.162708   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:33.162719   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:33.177064   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:33.177076   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:33.194260   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:33.194271   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:33.205429   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:33.205442   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:33.245208   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:33.245225   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:33.260139   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:33.260149   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:33.275394   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:33.275405   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:33.287054   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:33.287065   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:33.298408   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:33.298418   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:33.309815   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:33.309827   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:33.334023   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:33.334030   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:33.338264   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:33.338269   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:33.371755   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:33.371765   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:33.383561   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:33.383575   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:33.475605   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:33.475689   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:33.486213   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:33.486287   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:33.496437   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:33.496501   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:33.507872   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:33.507942   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:33.518460   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:33.518522   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:33.528741   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:33.528807   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:33.539491   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:33.539554   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:33.553805   21535 logs.go:276] 0 containers: []
	W0520 05:02:33.553820   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:33.553882   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:33.564014   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:33.564034   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:33.564040   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:33.568714   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:33.568722   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:33.585895   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:33.585905   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:33.601253   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:33.601265   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:33.613883   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:33.613894   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:33.625697   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:33.625708   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:33.640069   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:33.640082   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:33.656267   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:33.656276   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:33.670224   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:33.670235   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:33.707148   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:33.707159   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:33.720972   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:33.720982   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:33.732291   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:33.732301   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:33.755801   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:33.755809   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:33.768759   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:33.768771   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:33.804028   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:33.804042   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:33.816805   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:33.816817   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:33.838467   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:33.838481   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:35.897080   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:36.377964   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:40.899548   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:40.899786   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:40.922613   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:40.922731   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:40.938754   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:40.938842   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:40.951490   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:40.951564   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:40.965045   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:40.965114   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:40.975895   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:40.975958   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:40.986904   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:40.986969   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:40.996859   21370 logs.go:276] 0 containers: []
	W0520 05:02:40.996872   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:40.996952   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:41.007805   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:41.007821   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:41.007826   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:41.012914   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:41.012924   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:41.024759   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:41.024772   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:41.042437   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:41.042449   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:41.054114   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:41.054124   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:41.068019   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:41.068033   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:41.082447   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:41.082461   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:41.106108   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:41.106115   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:41.144773   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:41.144789   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:41.165972   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:41.165982   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:41.177990   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:41.178001   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:41.189702   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:41.189713   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:41.203512   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:41.203522   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:41.224047   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:41.224058   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
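	(The "container status" one-liner above — sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a — prefers crictl when it is on PATH and falls back to the Docker CLI otherwise. A minimal Go sketch of that fallback, hypothetical helper code rather than minikube's actual implementation:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatusArgs mirrors the shell fallback seen in the log:
	// use crictl if it is installed, otherwise fall back to `docker ps -a`.
	func containerStatusArgs() []string {
		if _, err := exec.LookPath("crictl"); err == nil {
			return []string{"crictl", "ps", "-a"} // CRI-aware container listing
		}
		return []string{"docker", "ps", "-a"} // Docker CLI fallback
	}

	func main() {
		args := containerStatusArgs()
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}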
	I0520 05:02:41.236748   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:41.236759   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:43.775949   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:41.380258   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:41.380401   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:41.390882   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:41.390947   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:41.401461   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:41.401533   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:41.412038   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:41.412108   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:41.422601   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:41.422673   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:41.433269   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:41.433335   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:41.443591   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:41.443659   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:41.454243   21535 logs.go:276] 0 containers: []
	W0520 05:02:41.454254   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:41.454310   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:41.464421   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:41.464439   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:41.464444   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:41.483515   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:41.483526   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:41.503097   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:41.503106   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:41.515679   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:41.515689   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:41.527283   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:41.527293   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:41.531758   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:41.531765   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:41.565739   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:41.565753   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:41.577748   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:41.577762   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:41.591839   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:41.591850   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:41.608581   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:41.608592   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:41.621905   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:41.621915   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:41.642516   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:41.642527   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:41.658059   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:41.658068   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:41.696804   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:41.696811   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:41.709046   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:41.709057   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:41.731907   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:41.731913   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:41.772059   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:41.772070   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
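	(The recurring "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pairs throughout this log are a probe loop: GET https://<node>:8443/healthz with a short per-request timeout, retried until an overall deadline expires. A minimal sketch of that loop, assuming a ~5s probe timeout to match the log's spacing — not minikube's actual api_server.go:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the overall deadline passes.
	func waitForHealthz(url string, deadline time.Time) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout, matching the log's ~5s gaps
			Transport: &http.Transport{
				// Probe-only shortcut; real code would pin the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)))
	}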
	I0520 05:02:44.288971   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:48.778230   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:48.778471   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:48.794597   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:48.794686   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:48.806649   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:48.806719   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:48.818242   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:48.818313   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:48.828403   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:48.828473   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:48.839263   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:48.839329   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:48.849825   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:48.849891   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:48.875895   21370 logs.go:276] 0 containers: []
	W0520 05:02:48.875904   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:48.875958   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:48.885908   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:48.885926   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:48.885932   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:48.902955   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:48.902969   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:48.914286   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:48.914300   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:48.925633   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:48.925644   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:48.964735   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:48.964754   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:48.969744   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:48.969751   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:48.983972   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:48.983986   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:48.998144   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:48.998157   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:49.013271   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:49.013285   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:49.034390   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:49.034401   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:49.048163   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:49.048174   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:49.071416   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:49.071426   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:49.105712   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:49.105722   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:49.117514   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:49.117525   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:49.128944   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:49.128954   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:49.291224   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:49.291304   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:49.301794   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:49.301871   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:49.312675   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:49.312746   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:49.322666   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:49.322730   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:49.338869   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:49.338953   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:49.352138   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:49.352215   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:49.364282   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:49.364352   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:49.374328   21535 logs.go:276] 0 containers: []
	W0520 05:02:49.374340   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:49.374403   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:49.384766   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:49.384785   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:49.384791   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:49.402271   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:49.402280   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:49.416772   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:49.416781   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:49.433704   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:49.433713   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:49.447047   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:49.447058   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:49.470451   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:49.470460   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:49.483691   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:49.483704   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:49.521842   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:49.521850   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:49.535317   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:49.535331   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:49.573389   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:49.573403   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:49.584634   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:49.584644   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:49.601612   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:49.601626   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:49.612901   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:49.612913   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:49.617065   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:49.617072   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:49.653456   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:49.653472   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:49.664620   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:49.664630   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:49.676278   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:49.676293   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:51.646959   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:52.190495   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:57.192812   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:57.192856   21535 kubeadm.go:591] duration metric: took 4m4.033322208s to restartPrimaryControlPlane
	W0520 05:02:57.192905   21535 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 05:02:57.192925   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 05:02:58.195025   21535 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002095833s)
	I0520 05:02:58.195095   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:58.200147   21535 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:02:58.203049   21535 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:02:58.206012   21535 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:02:58.206020   21535 kubeadm.go:156] found existing configuration files:
	
	I0520 05:02:58.206044   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf
	I0520 05:02:58.208967   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:02:58.208996   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:02:58.211632   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf
	I0520 05:02:58.214361   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:02:58.214393   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:02:58.217579   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf
	I0520 05:02:58.220617   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:02:58.220641   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:02:58.223171   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf
	I0520 05:02:58.225853   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:02:58.225875   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
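	(The grep-then-rm pairs above implement one simple idea: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, and otherwise delete it so the following `kubeadm init` regenerates it. A sketch of that pattern — illustrative code, not kubeadm.go itself:)

	package main

	import (
		"bytes"
		"os"
	)

	// cleanStaleConfigs removes any kubeconfig that is missing or does not
	// mention the expected endpoint, mirroring the log's grep + rm -f sequence.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				os.Remove(f) // missing or stale: remove, ignoring errors like rm -f
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:54172", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}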
	I0520 05:02:58.229013   21535 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:02:58.246051   21535 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 05:02:58.246131   21535 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:58.300143   21535 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:58.300194   21535 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:58.300242   21535 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:58.352992   21535 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:58.360998   21535 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:58.361033   21535 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:58.361106   21535 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:58.361166   21535 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 05:02:58.361227   21535 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 05:02:58.361314   21535 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 05:02:58.361352   21535 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 05:02:58.361414   21535 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 05:02:58.361485   21535 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 05:02:58.361532   21535 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 05:02:58.361626   21535 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 05:02:58.361657   21535 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 05:02:58.361726   21535 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:58.416956   21535 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:58.600489   21535 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:58.659640   21535 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:58.696756   21535 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:58.726654   21535 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:58.727196   21535 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:58.727225   21535 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:58.815514   21535 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:56.649447   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:56.649917   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:56.689074   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:02:56.689217   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:56.711628   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:02:56.711732   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:56.726725   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:02:56.726799   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:56.739213   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:02:56.739287   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:56.749984   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:02:56.750050   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:56.761227   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:02:56.761296   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:56.772009   21370 logs.go:276] 0 containers: []
	W0520 05:02:56.772021   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:56.772084   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:56.783989   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:02:56.784006   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:56.784012   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:56.821588   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:02:56.821598   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:02:56.839838   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:02:56.839847   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:02:56.851841   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:02:56.851852   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:02:56.869068   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:56.869079   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:56.873706   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:02:56.873713   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:56.884824   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:02:56.884833   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:02:56.896100   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:56.896113   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:56.920328   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:56.920335   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:56.955074   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:02:56.955087   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:02:56.972029   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:02:56.972041   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:02:56.986378   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:02:56.986390   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:02:56.998098   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:02:56.998110   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:02:57.016285   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:02:57.016296   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:02:57.027469   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:02:57.027479   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:02:59.540753   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:58.819745   21535 out.go:204]   - Booting up control plane ...
	I0520 05:02:58.819792   21535 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:58.819832   21535 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:58.819862   21535 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:58.819912   21535 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:58.822495   21535 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 05:03:03.325590   21535 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502410 seconds
	I0520 05:03:03.325698   21535 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:03:03.331174   21535 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:03:03.838947   21535 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:03:03.839057   21535 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-298000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:03:04.342791   21535 kubeadm.go:309] [bootstrap-token] Using token: vpjlvi.b3xqzdy0rkb3gdrn
	I0520 05:03:04.542932   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:04.543062   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:03:04.554797   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:03:04.554877   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:03:04.566234   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:03:04.566303   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:03:04.578881   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:03:04.578973   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:03:04.590081   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:03:04.590147   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:03:04.346288   21535 out.go:204]   - Configuring RBAC rules ...
	I0520 05:03:04.346355   21535 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:03:04.349297   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:03:04.352221   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:03:04.353084   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:03:04.353913   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:03:04.354706   21535 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:03:04.358069   21535 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:03:04.535098   21535 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:03:04.751507   21535 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:03:04.751960   21535 kubeadm.go:309] 
	I0520 05:03:04.752069   21535 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:03:04.752077   21535 kubeadm.go:309] 
	I0520 05:03:04.752121   21535 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:03:04.752125   21535 kubeadm.go:309] 
	I0520 05:03:04.752142   21535 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:03:04.752243   21535 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:03:04.752284   21535 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:03:04.752306   21535 kubeadm.go:309] 
	I0520 05:03:04.752347   21535 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:03:04.752354   21535 kubeadm.go:309] 
	I0520 05:03:04.752374   21535 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:03:04.752378   21535 kubeadm.go:309] 
	I0520 05:03:04.752415   21535 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:03:04.752452   21535 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:03:04.752498   21535 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:03:04.752505   21535 kubeadm.go:309] 
	I0520 05:03:04.752580   21535 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:03:04.752633   21535 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:03:04.752639   21535 kubeadm.go:309] 
	I0520 05:03:04.752731   21535 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vpjlvi.b3xqzdy0rkb3gdrn \
	I0520 05:03:04.752839   21535 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc \
	I0520 05:03:04.752855   21535 kubeadm.go:309] 	--control-plane 
	I0520 05:03:04.752863   21535 kubeadm.go:309] 
	I0520 05:03:04.752982   21535 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:03:04.752987   21535 kubeadm.go:309] 
	I0520 05:03:04.753023   21535 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vpjlvi.b3xqzdy0rkb3gdrn \
	I0520 05:03:04.753082   21535 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc 
	I0520 05:03:04.753133   21535 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
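	(The --discovery-token-ca-cert-hash value printed in the join commands above is, per kubeadm's documented public-key pinning, the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A sketch of that derivation under that assumption — verify against the kubeadm docs before relying on it:)

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash computes the kubeadm-style "sha256:<hex>" pin for a CA cert:
	// the hash of the DER-encoded SubjectPublicKeyInfo of its public key.
	func caCertHash(caPEMPath string) (string, error) {
		pemBytes, err := os.ReadFile(caPEMPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", caPEMPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(spki)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(h)
	}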
	I0520 05:03:04.753138   21535 cni.go:84] Creating CNI manager for ""
	I0520 05:03:04.753145   21535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:03:04.755865   21535 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 05:03:04.765869   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 05:03:04.769463   21535 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
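	(The 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log; the sketch below writes an illustrative bridge conflist of the kind minikube's bridge CNI path produces — the field values are assumptions, not the exact file:)

	package main

	import "os"

	// Illustrative bridge CNI config: a bridge plugin with host-local IPAM
	// plus a portmap plugin; the real file's contents may differ.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Same destination the log shows; requires root on the guest.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}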
	I0520 05:03:04.775023   21535 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:03:04.775087   21535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:03:04.775148   21535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-298000 minikube.k8s.io/updated_at=2024_05_20T05_03_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=stopped-upgrade-298000 minikube.k8s.io/primary=true
	I0520 05:03:04.823728   21535 ops.go:34] apiserver oom_adj: -16
	I0520 05:03:04.823728   21535 kubeadm.go:1107] duration metric: took 48.702209ms to wait for elevateKubeSystemPrivileges
	W0520 05:03:04.823752   21535 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:03:04.823756   21535 kubeadm.go:393] duration metric: took 4m11.677306459s to StartCluster
	I0520 05:03:04.823767   21535 settings.go:142] acquiring lock: {Name:mkb0015ab6abb1526406adb43e2b3d4392387c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:03:04.823859   21535 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:03:04.824274   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:03:04.824488   21535 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:03:04.826274   21535 out.go:177] * Verifying Kubernetes components...
	I0520 05:03:04.824531   21535 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:03:04.824585   21535 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:03:04.835695   21535 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-298000"
	I0520 05:03:04.835707   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:03:04.835730   21535 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-298000"
	I0520 05:03:04.835702   21535 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-298000"
	W0520 05:03:04.835740   21535 addons.go:243] addon storage-provisioner should already be in state true
	I0520 05:03:04.835756   21535 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-298000"
	I0520 05:03:04.835773   21535 host.go:66] Checking if "stopped-upgrade-298000" exists ...
	I0520 05:03:04.840814   21535 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:03:04.844892   21535 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:03:04.844902   21535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:03:04.844911   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 05:03:04.846045   21535 kapi.go:59] client config for stopped-upgrade-298000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10586c580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 05:03:04.846169   21535 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-298000"
	W0520 05:03:04.846176   21535 addons.go:243] addon default-storageclass should already be in state true
	I0520 05:03:04.846188   21535 host.go:66] Checking if "stopped-upgrade-298000" exists ...
	I0520 05:03:04.847187   21535 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:03:04.847193   21535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:03:04.847200   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 05:03:04.927874   21535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:03:04.932796   21535 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:03:04.932843   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:03:04.936601   21535 api_server.go:72] duration metric: took 112.100791ms to wait for apiserver process to appear ...
	I0520 05:03:04.936609   21535 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:03:04.936616   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:04.959286   21535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:03:04.965152   21535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:03:04.601322   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:03:04.601394   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:03:04.613019   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:03:04.613093   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:03:04.624484   21370 logs.go:276] 0 containers: []
	W0520 05:03:04.624496   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:03:04.624555   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:03:04.636034   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:03:04.636054   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:03:04.636060   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:03:04.649652   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:03:04.649663   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:03:04.689440   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:03:04.689465   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:03:04.703715   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:03:04.703725   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:03:04.717107   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:03:04.717118   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:03:04.729337   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:03:04.729348   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:03:04.749515   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:03:04.749531   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:03:04.754771   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:03:04.754779   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:03:04.798548   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:03:04.798559   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:03:04.811325   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:03:04.811337   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:03:04.824378   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:03:04.824388   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:03:04.836645   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:03:04.836653   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:03:04.865817   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:03:04.865831   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:03:04.878124   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:03:04.878136   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:03:04.893361   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:03:04.893371   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:03:07.422058   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:09.938674   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:09.938727   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:12.424273   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:12.424455   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:03:12.440954   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:03:12.441033   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:03:12.453425   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:03:12.453491   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:03:12.464087   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:03:12.464159   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:03:12.474278   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:03:12.474344   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:03:12.484887   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:03:12.484945   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:03:12.494755   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:03:12.494821   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:03:12.506353   21370 logs.go:276] 0 containers: []
	W0520 05:03:12.506366   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:03:12.506433   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:03:12.521519   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:03:12.521537   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:03:12.521542   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:03:12.533259   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:03:12.533274   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:03:12.544940   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:03:12.544951   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:03:12.579812   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:03:12.579823   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:03:12.596153   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:03:12.596162   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:03:12.607903   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:03:12.607916   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:03:12.619585   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:03:12.619596   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:03:12.633926   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:03:12.633936   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:03:12.648566   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:03:12.648577   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:03:12.660892   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:03:12.660903   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:03:12.674205   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:03:12.674219   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:03:12.689341   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:03:12.689351   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:03:12.706937   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:03:12.706951   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:03:12.730230   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:03:12.730238   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:03:12.767947   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:03:12.767964   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:03:14.938923   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:14.938945   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:15.274993   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:19.939228   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:19.939265   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:20.277282   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:20.277495   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:03:20.301953   21370 logs.go:276] 1 containers: [0b425496d79d]
	I0520 05:03:20.302074   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:03:20.318665   21370 logs.go:276] 1 containers: [6ec8e90f3762]
	I0520 05:03:20.318745   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:03:20.331240   21370 logs.go:276] 4 containers: [56d4854231b6 95ad72ffea28 4ee2c0dfaeaf 6b496c84088a]
	I0520 05:03:20.331325   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:03:20.341932   21370 logs.go:276] 1 containers: [333a015b3a39]
	I0520 05:03:20.341999   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:03:20.353262   21370 logs.go:276] 1 containers: [b3ad100e7c80]
	I0520 05:03:20.353331   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:03:20.364045   21370 logs.go:276] 1 containers: [a1b1cc1fdf9b]
	I0520 05:03:20.364118   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:03:20.374123   21370 logs.go:276] 0 containers: []
	W0520 05:03:20.374134   21370 logs.go:278] No container was found matching "kindnet"
	I0520 05:03:20.374191   21370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:03:20.384882   21370 logs.go:276] 1 containers: [ffea2e6e531d]
	I0520 05:03:20.384901   21370 logs.go:123] Gathering logs for coredns [56d4854231b6] ...
	I0520 05:03:20.384906   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56d4854231b6"
	I0520 05:03:20.402988   21370 logs.go:123] Gathering logs for coredns [4ee2c0dfaeaf] ...
	I0520 05:03:20.403001   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ee2c0dfaeaf"
	I0520 05:03:20.415495   21370 logs.go:123] Gathering logs for coredns [6b496c84088a] ...
	I0520 05:03:20.415507   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b496c84088a"
	I0520 05:03:20.435864   21370 logs.go:123] Gathering logs for container status ...
	I0520 05:03:20.435875   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:03:20.448026   21370 logs.go:123] Gathering logs for dmesg ...
	I0520 05:03:20.448037   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:03:20.452334   21370 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:03:20.452340   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:03:20.486772   21370 logs.go:123] Gathering logs for kube-apiserver [0b425496d79d] ...
	I0520 05:03:20.486787   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b425496d79d"
	I0520 05:03:20.501595   21370 logs.go:123] Gathering logs for storage-provisioner [ffea2e6e531d] ...
	I0520 05:03:20.501604   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea2e6e531d"
	I0520 05:03:20.513494   21370 logs.go:123] Gathering logs for coredns [95ad72ffea28] ...
	I0520 05:03:20.513504   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95ad72ffea28"
	I0520 05:03:20.525566   21370 logs.go:123] Gathering logs for kube-controller-manager [a1b1cc1fdf9b] ...
	I0520 05:03:20.525578   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1b1cc1fdf9b"
	I0520 05:03:20.543427   21370 logs.go:123] Gathering logs for kubelet ...
	I0520 05:03:20.543439   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:03:20.581896   21370 logs.go:123] Gathering logs for etcd [6ec8e90f3762] ...
	I0520 05:03:20.581910   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ec8e90f3762"
	I0520 05:03:20.600804   21370 logs.go:123] Gathering logs for kube-scheduler [333a015b3a39] ...
	I0520 05:03:20.600818   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 333a015b3a39"
	I0520 05:03:20.620570   21370 logs.go:123] Gathering logs for kube-proxy [b3ad100e7c80] ...
	I0520 05:03:20.620584   21370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3ad100e7c80"
	I0520 05:03:20.636983   21370 logs.go:123] Gathering logs for Docker ...
	I0520 05:03:20.636994   21370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:03:23.163110   21370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:24.939709   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:24.939791   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:28.165298   21370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:28.171080   21370 out.go:177] 
	W0520 05:03:28.174041   21370 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 05:03:28.174059   21370 out.go:239] * 
	W0520 05:03:28.175262   21370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:03:28.188976   21370 out.go:177] 
	I0520 05:03:29.940358   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:29.940406   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:34.941099   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:34.941134   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 05:03:35.350875   21535 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 05:03:35.354021   21535 out.go:177] * Enabled addons: storage-provisioner
	I0520 05:03:35.365912   21535 addons.go:505] duration metric: took 30.54160525s for enable addons: enabled=[storage-provisioner]
	I0520 05:03:39.942085   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:39.942144   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
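
The alternating "Checking apiserver healthz" / "stopped: ... context deadline exceeded" pairs above are minikube's health probe: an HTTPS GET against /healthz with a short per-request client timeout, retried until an overall deadline (6m0s for this start) runs out. A minimal Go sketch of that pattern, assuming an illustrative 5-second per-request timeout and a probe-only TLS config; minikube's real loop lives in api_server.go, as the log prefixes show:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries GET url until it returns 200 or the deadline passes.
func pollHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		// Per-request cap; a hung apiserver surfaces as "Client.Timeout exceeded".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Probe only: the apiserver cert is not in the host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(6*time.Minute)))
}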
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-05-20 11:54:36 UTC, ends at Mon 2024-05-20 12:03:44 UTC. --
	May 20 12:03:28 running-upgrade-158000 dockerd[3181]: time="2024-05-20T12:03:28.761736329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 12:03:28 running-upgrade-158000 dockerd[3181]: time="2024-05-20T12:03:28.761903495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 12:03:28 running-upgrade-158000 dockerd[3181]: time="2024-05-20T12:03:28.761930369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 12:03:28 running-upgrade-158000 dockerd[3181]: time="2024-05-20T12:03:28.762020785Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/38c08cdacf73029c9932b715fc5c86958b0478872e510cb7e2bec92db4035817 pid=18825 runtime=io.containerd.runc.v2
	May 20 12:03:29 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:29Z" level=error msg="ContainerStats resp: {0x400043fe00 linux}"
	May 20 12:03:29 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:29Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x400092ef40 linux}"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x400092f080 linux}"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x400092f4c0 linux}"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x400092fc80 linux}"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x4000919f00 linux}"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x400092fdc0 linux}"
	May 20 12:03:30 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:30Z" level=error msg="ContainerStats resp: {0x40006b6c40 linux}"
	May 20 12:03:34 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 12:03:39 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 12:03:40 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:40Z" level=error msg="ContainerStats resp: {0x4000919080 linux}"
	May 20 12:03:40 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:40Z" level=error msg="ContainerStats resp: {0x40003530c0 linux}"
	May 20 12:03:41 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:41Z" level=error msg="ContainerStats resp: {0x40006b6a00 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x40008bcf80 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x40006b76c0 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x40008bdcc0 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x40006b7e00 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x400041c6c0 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x4000363dc0 linux}"
	May 20 12:03:42 running-upgrade-158000 cri-dockerd[3024]: time="2024-05-20T12:03:42Z" level=error msg="ContainerStats resp: {0x4000206380 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	38c08cdacf730       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   7d8f7da40db5b
	a04e654fae201       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   73a855ce21529
	56d4854231b6c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   73a855ce21529
	95ad72ffea288       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   7d8f7da40db5b
	b3ad100e7c806       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   9228f6e78496a
	ffea2e6e531d6       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   7ff63aa7bc871
	a1b1cc1fdf9ba       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   fc07d4f75944e
	6ec8e90f3762d       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   4c585b2683a1d
	333a015b3a392       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   6c4c64cd77f81
	0b425496d79d8       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   772a07055e329
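
The IDs in this table are what the log-gathering steps earlier discovered one component at a time, using docker name filters (k8s_<component>) and a Go-template format string. A sketch of that discovery with os/exec, assuming a reachable local docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the kubeadm convention k8s_<component>, mirroring the ssh_runner commands above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		fmt.Println(c, ids, err)
	}
}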
	
	
	==> coredns [38c08cdacf73] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6028605058391447734.7045167679861334047. HINFO: read udp 10.244.0.3:34066->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6028605058391447734.7045167679861334047. HINFO: read udp 10.244.0.3:60138->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6028605058391447734.7045167679861334047. HINFO: read udp 10.244.0.3:50639->10.0.2.3:53: i/o timeout
	
	
	==> coredns [56d4854231b6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:42398->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:60334->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:45158->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:39885->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:59106->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:44178->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:56776->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:57526->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:37744->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1855428328052375708.4941056338845227800. HINFO: read udp 10.244.0.2:42801->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [95ad72ffea28] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:51688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:59451->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:39439->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:40534->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:47043->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:56526->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:45506->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:34963->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:44531->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1027612926617745013.3837732368954054539. HINFO: read udp 10.244.0.3:53155->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a04e654fae20] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6133805448940786883.571914956339377626. HINFO: read udp 10.244.0.2:56720->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6133805448940786883.571914956339377626. HINFO: read udp 10.244.0.2:46114->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6133805448940786883.571914956339377626. HINFO: read udp 10.244.0.2:59426->10.0.2.3:53: i/o timeout
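
Every CoreDNS instance above fails the same way: the random-label HINFO queries are its loop-detection probes, and each one forwarded to the upstream resolver at 10.0.2.3:53 (QEMU user-mode networking's built-in DNS) times out. A hedged Go sketch of probing that upstream directly, using an arbitrary lookup name:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// probeUpstream forces a lookup through one specific DNS server, so a dead
// upstream surfaces as the same "i/o timeout" seen in the CoreDNS logs above.
func probeUpstream(server string) error {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", server)
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	_, err := r.LookupHost(ctx, "kubernetes.io") // any name works; reachability is the question
	return err
}

func main() {
	fmt.Println(probeUpstream("10.0.2.3:53"))
}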
	
	
	==> describe nodes <==
	Name:               running-upgrade-158000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-158000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45
	                    minikube.k8s.io/name=running-upgrade-158000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T04_59_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-158000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:03:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:59:27 +0000   Mon, 20 May 2024 11:59:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:59:27 +0000   Mon, 20 May 2024 11:59:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:59:27 +0000   Mon, 20 May 2024 11:59:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:59:27 +0000   Mon, 20 May 2024 11:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-158000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c4bb58dc6944569b26da57d3e6b6f1d
	  System UUID:                7c4bb58dc6944569b26da57d3e6b6f1d
	  Boot ID:                    afc5751f-f15c-44d4-b35b-2ae987473284
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9hhgw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-wtqr8                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-158000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-158000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-158000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-g49x9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-158000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-158000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-158000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-158000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-158000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-158000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-158000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-158000 status is now: NodeReady
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-158000 event: Registered Node running-upgrade-158000 in Controller
	
	
	==> dmesg <==
	[  +1.735347] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.099142] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.080793] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.144012] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.097112] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.080794] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.375606] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[May20 11:55] systemd-fstab-generator[1924]: Ignoring "noauto" for root device
	[  +2.390308] systemd-fstab-generator[2201]: Ignoring "noauto" for root device
	[  +0.139797] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.106588] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +0.102874] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[  +2.757605] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.217124] systemd-fstab-generator[2979]: Ignoring "noauto" for root device
	[  +0.080452] systemd-fstab-generator[2992]: Ignoring "noauto" for root device
	[  +0.081621] systemd-fstab-generator[3003]: Ignoring "noauto" for root device
	[  +0.087050] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +2.176743] systemd-fstab-generator[3167]: Ignoring "noauto" for root device
	[  +3.776069] systemd-fstab-generator[3556]: Ignoring "noauto" for root device
	[  +1.398292] systemd-fstab-generator[3854]: Ignoring "noauto" for root device
	[ +19.266431] kauditd_printk_skb: 68 callbacks suppressed
	[May20 11:59] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.450661] systemd-fstab-generator[11879]: Ignoring "noauto" for root device
	[  +5.635143] systemd-fstab-generator[12472]: Ignoring "noauto" for root device
	[  +0.463543] systemd-fstab-generator[12605]: Ignoring "noauto" for root device
	
	
	==> etcd [6ec8e90f3762] <==
	{"level":"info","ts":"2024-05-20T11:59:22.748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-20T11:59:22.748Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-20T11:59:22.749Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:59:22.749Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-20T11:59:22.749Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-20T11:59:22.749Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:59:22.749Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T11:59:22.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-20T11:59:22.800Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-158000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:59:22.800Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:59:22.801Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-05-20T11:59:22.801Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:59:22.801Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:59:22.801Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:59:22.801Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:59:22.801Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:59:22.823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:59:22.823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:59:22.823Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 12:03:44 up 9 min,  0 users,  load average: 0.81, 0.35, 0.17
	Linux running-upgrade-158000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [0b425496d79d] <==
	I0520 11:59:24.258172       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:59:24.259360       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:59:24.259365       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0520 11:59:24.259372       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:59:24.259529       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0520 11:59:24.261930       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0520 11:59:24.275290       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0520 11:59:24.997857       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 11:59:25.166597       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 11:59:25.171782       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 11:59:25.171807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:59:25.305514       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:59:25.317896       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 11:59:25.441674       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0520 11:59:25.446926       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0520 11:59:25.447368       1 controller.go:611] quota admission added evaluator for: endpoints
	I0520 11:59:25.448734       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 11:59:26.294027       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0520 11:59:26.930011       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0520 11:59:26.933707       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0520 11:59:26.940696       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0520 11:59:26.989948       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:59:40.561971       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0520 11:59:40.562355       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0520 11:59:41.400577       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [a1b1cc1fdf9b] <==
	I0520 11:59:40.544331       1 shared_informer.go:262] Caches are synced for crt configmap
	I0520 11:59:40.546081       1 shared_informer.go:262] Caches are synced for daemon sets
	I0520 11:59:40.546098       1 shared_informer.go:262] Caches are synced for deployment
	I0520 11:59:40.547041       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0520 11:59:40.550546       1 shared_informer.go:262] Caches are synced for namespace
	I0520 11:59:40.563755       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0520 11:59:40.566241       1 shared_informer.go:262] Caches are synced for node
	I0520 11:59:40.566347       1 range_allocator.go:173] Starting range CIDR allocator
	I0520 11:59:40.566372       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0520 11:59:40.566389       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0520 11:59:40.569590       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-g49x9"
	I0520 11:59:40.573781       1 range_allocator.go:374] Set node running-upgrade-158000 PodCIDR to [10.244.0.0/24]
	I0520 11:59:40.585826       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wtqr8"
	I0520 11:59:40.587632       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9hhgw"
	I0520 11:59:40.594202       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0520 11:59:40.644896       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0520 11:59:40.694803       1 shared_informer.go:262] Caches are synced for persistent volume
	I0520 11:59:40.744404       1 shared_informer.go:262] Caches are synced for disruption
	I0520 11:59:40.744440       1 disruption.go:371] Sending events to api server.
	I0520 11:59:40.766248       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 11:59:40.796917       1 shared_informer.go:262] Caches are synced for stateful set
	I0520 11:59:40.801237       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 11:59:41.187193       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 11:59:41.194341       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 11:59:41.194348       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [b3ad100e7c80] <==
	I0520 11:59:41.389338       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0520 11:59:41.389374       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0520 11:59:41.389383       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0520 11:59:41.398511       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0520 11:59:41.398522       1 server_others.go:206] "Using iptables Proxier"
	I0520 11:59:41.398580       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0520 11:59:41.398707       1 server.go:661] "Version info" version="v1.24.1"
	I0520 11:59:41.398715       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:59:41.399044       1 config.go:317] "Starting service config controller"
	I0520 11:59:41.399096       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0520 11:59:41.399109       1 config.go:226] "Starting endpoint slice config controller"
	I0520 11:59:41.399205       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0520 11:59:41.399467       1 config.go:444] "Starting node config controller"
	I0520 11:59:41.399504       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0520 11:59:41.499764       1 shared_informer.go:262] Caches are synced for node config
	I0520 11:59:41.499803       1 shared_informer.go:262] Caches are synced for service config
	I0520 11:59:41.499837       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [333a015b3a39] <==
	W0520 11:59:24.214428       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:59:24.214434       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:59:24.214482       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:59:24.214491       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 11:59:24.214510       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 11:59:24.214516       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 11:59:24.214527       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:59:24.214531       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 11:59:24.216445       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:59:24.216464       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:59:24.216548       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:59:24.216557       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 11:59:24.216580       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:59:24.216588       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:59:24.216613       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:59:24.216619       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:59:24.216631       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:59:24.216634       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 11:59:25.020646       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:59:25.020742       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:59:25.142208       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:59:25.142527       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:59:25.150938       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:59:25.150976       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 11:59:28.311024       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-05-20 11:54:36 UTC, ends at Mon 2024-05-20 12:03:44 UTC. --
	May 20 11:59:27 running-upgrade-158000 kubelet[12478]: I0520 11:59:27.960917   12478 apiserver.go:52] "Watching apiserver"
	May 20 11:59:28 running-upgrade-158000 kubelet[12478]: I0520 11:59:28.191409   12478 reconciler.go:157] "Reconciler: start to sync state"
	May 20 11:59:28 running-upgrade-158000 kubelet[12478]: E0520 11:59:28.569069   12478 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-158000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-158000"
	May 20 11:59:28 running-upgrade-158000 kubelet[12478]: E0520 11:59:28.763982   12478 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-158000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-158000"
	May 20 11:59:28 running-upgrade-158000 kubelet[12478]: E0520 11:59:28.963481   12478 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-158000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-158000"
	May 20 11:59:29 running-upgrade-158000 kubelet[12478]: I0520 11:59:29.161153   12478 request.go:601] Waited for 1.145013192s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 20 11:59:29 running-upgrade-158000 kubelet[12478]: E0520 11:59:29.163940   12478 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-158000\" already exists" pod="kube-system/etcd-running-upgrade-158000"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.508567   12478 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.571432   12478 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.588254   12478 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.588227   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80b374f0-25ad-4100-b87b-e96cfc89a2a0-kube-proxy\") pod \"kube-proxy-g49x9\" (UID: \"80b374f0-25ad-4100-b87b-e96cfc89a2a0\") " pod="kube-system/kube-proxy-g49x9"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.588559   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwp5\" (UniqueName: \"kubernetes.io/projected/9a1775ed-d428-4d61-ad2a-a61965c27a14-kube-api-access-4bwp5\") pod \"storage-provisioner\" (UID: \"9a1775ed-d428-4d61-ad2a-a61965c27a14\") " pod="kube-system/storage-provisioner"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.588571   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80b374f0-25ad-4100-b87b-e96cfc89a2a0-lib-modules\") pod \"kube-proxy-g49x9\" (UID: \"80b374f0-25ad-4100-b87b-e96cfc89a2a0\") " pod="kube-system/kube-proxy-g49x9"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.588581   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqj6d\" (UniqueName: \"kubernetes.io/projected/80b374f0-25ad-4100-b87b-e96cfc89a2a0-kube-api-access-xqj6d\") pod \"kube-proxy-g49x9\" (UID: \"80b374f0-25ad-4100-b87b-e96cfc89a2a0\") " pod="kube-system/kube-proxy-g49x9"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.588676   12478 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.590016   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9a1775ed-d428-4d61-ad2a-a61965c27a14-tmp\") pod \"storage-provisioner\" (UID: \"9a1775ed-d428-4d61-ad2a-a61965c27a14\") " pod="kube-system/storage-provisioner"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.590041   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80b374f0-25ad-4100-b87b-e96cfc89a2a0-xtables-lock\") pod \"kube-proxy-g49x9\" (UID: \"80b374f0-25ad-4100-b87b-e96cfc89a2a0\") " pod="kube-system/kube-proxy-g49x9"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.591898   12478 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.595652   12478 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.690149   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549xt\" (UniqueName: \"kubernetes.io/projected/16d0906e-6c5e-4edc-a1b3-e9493f1e541d-kube-api-access-549xt\") pod \"coredns-6d4b75cb6d-wtqr8\" (UID: \"16d0906e-6c5e-4edc-a1b3-e9493f1e541d\") " pod="kube-system/coredns-6d4b75cb6d-wtqr8"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.690206   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16d0906e-6c5e-4edc-a1b3-e9493f1e541d-config-volume\") pod \"coredns-6d4b75cb6d-wtqr8\" (UID: \"16d0906e-6c5e-4edc-a1b3-e9493f1e541d\") " pod="kube-system/coredns-6d4b75cb6d-wtqr8"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.690242   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e5f1329-240d-4582-8787-0eaf5c8a2a7e-config-volume\") pod \"coredns-6d4b75cb6d-9hhgw\" (UID: \"3e5f1329-240d-4582-8787-0eaf5c8a2a7e\") " pod="kube-system/coredns-6d4b75cb6d-9hhgw"
	May 20 11:59:40 running-upgrade-158000 kubelet[12478]: I0520 11:59:40.690254   12478 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrjl\" (UniqueName: \"kubernetes.io/projected/3e5f1329-240d-4582-8787-0eaf5c8a2a7e-kube-api-access-twrjl\") pod \"coredns-6d4b75cb6d-9hhgw\" (UID: \"3e5f1329-240d-4582-8787-0eaf5c8a2a7e\") " pod="kube-system/coredns-6d4b75cb6d-9hhgw"
	May 20 12:03:29 running-upgrade-158000 kubelet[12478]: I0520 12:03:29.646038   12478 scope.go:110] "RemoveContainer" containerID="4ee2c0dfaeaf765234c19af80966bd9e0a893fe9effff3de2a9bc842faa26522"
	May 20 12:03:29 running-upgrade-158000 kubelet[12478]: I0520 12:03:29.662170   12478 scope.go:110] "RemoveContainer" containerID="6b496c84088a27719eee7e0f6aab1b7765e07d94f6f418bdfa8b7d7352e732f4"
	
	
	==> storage-provisioner [ffea2e6e531d] <==
	I0520 11:59:41.312132       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:59:41.317962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:59:41.318032       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:59:41.321657       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:59:41.321833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ccdb2cc-1e4e-45ef-ac67-aaf3d03e25d2", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-158000_29934e79-14cb-4e87-baca-48969e135d36 became leader
	I0520 11:59:41.321868       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-158000_29934e79-14cb-4e87-baca-48969e135d36!
	I0520 11:59:41.422974       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-158000_29934e79-14cb-4e87-baca-48969e135d36!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-158000 -n running-upgrade-158000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-158000 -n running-upgrade-158000: exit status 2 (15.648844083s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-158000" apiserver is not running, skipping kubectl commands (state="Stopped")
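
The --format={{.APIServer}} flag used above is a Go text/template rendered against minikube's status struct, which is why a bare field name is enough to extract one value. A self-contained sketch of that mechanism (the Status struct below is a stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders with --format.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// Mirrors the observed output: only "Stopped" is printed.
	tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
}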
helpers_test.go:175: Cleaning up "running-upgrade-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-158000
--- FAIL: TestRunningBinaryUpgrade (587.81s)

TestKubernetesUpgrade (17.68s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.142090208s)

-- stdout --
	* [kubernetes-upgrade-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-839000" primary control-plane node in "kubernetes-upgrade-839000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-839000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
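
Both VM creation attempts above die the same way: the qemu2 driver reaches the guest network through socket_vmnet, and "Connection refused" on /var/run/socket_vmnet means nothing was accepting on that unix socket when minikube dialed it. A small diagnostic sketch (assuming ordinary unix-socket semantics and the path from the log) that separates a missing socket file from a dead daemon:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"net"
	"syscall"
	"time"
)

// checkSocket dials the socket_vmnet endpoint the way a client would and
// classifies the two usual failure modes behind "Connection refused".
func checkSocket(path string) {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("socket file exists but no daemon is accepting (is socket_vmnet running?)")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("socket file does not exist at", path)
	default:
		fmt.Println("dial failed:", err)
	}
}

func main() {
	checkSocket("/var/run/socket_vmnet")
}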
** stderr ** 
	I0520 04:57:12.914673   21461 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:57:12.914828   21461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:57:12.914831   21461 out.go:304] Setting ErrFile to fd 2...
	I0520 04:57:12.914833   21461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:57:12.914972   21461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:57:12.916253   21461 out.go:298] Setting JSON to false
	I0520 04:57:12.934245   21461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10603,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:57:12.934320   21461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:57:12.938854   21461 out.go:177] * [kubernetes-upgrade-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:57:12.946869   21461 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:57:12.950826   21461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:57:12.946907   21461 notify.go:220] Checking for updates...
	I0520 04:57:12.954787   21461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:57:12.957830   21461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:57:12.960805   21461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:57:12.963816   21461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:57:12.967223   21461 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:57:12.967288   21461 config.go:182] Loaded profile config "running-upgrade-158000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:57:12.967334   21461 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:57:12.970817   21461 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:57:12.977783   21461 start.go:297] selected driver: qemu2
	I0520 04:57:12.977792   21461 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:57:12.977798   21461 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:57:12.980217   21461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:57:12.982788   21461 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:57:12.985888   21461 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:57:12.985903   21461 cni.go:84] Creating CNI manager for ""
	I0520 04:57:12.985908   21461 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:57:12.985935   21461 start.go:340] cluster config:
	{Name:kubernetes-upgrade-839000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:57:12.990752   21461 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:57:12.997645   21461 out.go:177] * Starting "kubernetes-upgrade-839000" primary control-plane node in "kubernetes-upgrade-839000" cluster
	I0520 04:57:13.001803   21461 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:57:13.001824   21461 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:57:13.001835   21461 cache.go:56] Caching tarball of preloaded images
	I0520 04:57:13.001901   21461 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:57:13.001906   21461 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:57:13.001961   21461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kubernetes-upgrade-839000/config.json ...
	I0520 04:57:13.001971   21461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kubernetes-upgrade-839000/config.json: {Name:mkd90bc780e8f2807762d7b33ed8cee3f4ffac48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:57:13.002206   21461 start.go:360] acquireMachinesLock for kubernetes-upgrade-839000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:57:13.002239   21461 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "kubernetes-upgrade-839000"
	I0520 04:57:13.002250   21461 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:57:13.002281   21461 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:57:13.005779   21461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:57:13.031643   21461 start.go:159] libmachine.API.Create for "kubernetes-upgrade-839000" (driver="qemu2")
	I0520 04:57:13.031674   21461 client.go:168] LocalClient.Create starting
	I0520 04:57:13.031748   21461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:57:13.031779   21461 main.go:141] libmachine: Decoding PEM data...
	I0520 04:57:13.031791   21461 main.go:141] libmachine: Parsing certificate...
	I0520 04:57:13.031840   21461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:57:13.031863   21461 main.go:141] libmachine: Decoding PEM data...
	I0520 04:57:13.031868   21461 main.go:141] libmachine: Parsing certificate...
	I0520 04:57:13.032226   21461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:57:13.186718   21461 main.go:141] libmachine: Creating SSH key...
	I0520 04:57:13.557867   21461 main.go:141] libmachine: Creating Disk image...
	I0520 04:57:13.557881   21461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:57:13.558108   21461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:13.573050   21461 main.go:141] libmachine: STDOUT: 
	I0520 04:57:13.573074   21461 main.go:141] libmachine: STDERR: 
	I0520 04:57:13.573138   21461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2 +20000M
	I0520 04:57:13.584565   21461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:57:13.584584   21461 main.go:141] libmachine: STDERR: 
	I0520 04:57:13.584603   21461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:13.584610   21461 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:57:13.584646   21461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:30:00:4b:bd:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:13.586364   21461 main.go:141] libmachine: STDOUT: 
	I0520 04:57:13.586378   21461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:57:13.586398   21461 client.go:171] duration metric: took 554.722917ms to LocalClient.Create
	I0520 04:57:15.588655   21461 start.go:128] duration metric: took 2.586367417s to createHost
	I0520 04:57:15.588721   21461 start.go:83] releasing machines lock for "kubernetes-upgrade-839000", held for 2.586491667s
	W0520 04:57:15.588787   21461 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:57:15.602371   21461 out.go:177] * Deleting "kubernetes-upgrade-839000" in qemu2 ...
	W0520 04:57:15.627682   21461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:57:15.627715   21461 start.go:728] Will try again in 5 seconds ...
	I0520 04:57:20.629872   21461 start.go:360] acquireMachinesLock for kubernetes-upgrade-839000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:57:20.630382   21461 start.go:364] duration metric: took 367.541µs to acquireMachinesLock for "kubernetes-upgrade-839000"
	I0520 04:57:20.630621   21461 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:57:20.630911   21461 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:57:20.640669   21461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:57:20.692156   21461 start.go:159] libmachine.API.Create for "kubernetes-upgrade-839000" (driver="qemu2")
	I0520 04:57:20.692195   21461 client.go:168] LocalClient.Create starting
	I0520 04:57:20.692322   21461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 04:57:20.692395   21461 main.go:141] libmachine: Decoding PEM data...
	I0520 04:57:20.692415   21461 main.go:141] libmachine: Parsing certificate...
	I0520 04:57:20.692472   21461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 04:57:20.692523   21461 main.go:141] libmachine: Decoding PEM data...
	I0520 04:57:20.692536   21461 main.go:141] libmachine: Parsing certificate...
	I0520 04:57:20.693068   21461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:57:20.839400   21461 main.go:141] libmachine: Creating SSH key...
	I0520 04:57:20.955764   21461 main.go:141] libmachine: Creating Disk image...
	I0520 04:57:20.955775   21461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:57:20.956020   21461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:20.970695   21461 main.go:141] libmachine: STDOUT: 
	I0520 04:57:20.970720   21461 main.go:141] libmachine: STDERR: 
	I0520 04:57:20.970814   21461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2 +20000M
	I0520 04:57:20.983081   21461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:57:20.983103   21461 main.go:141] libmachine: STDERR: 
	I0520 04:57:20.983124   21461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:20.983128   21461 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:57:20.983158   21461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:89:91:5b:65:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:20.985065   21461 main.go:141] libmachine: STDOUT: 
	I0520 04:57:20.985079   21461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:57:20.985094   21461 client.go:171] duration metric: took 292.896375ms to LocalClient.Create
	I0520 04:57:22.987379   21461 start.go:128] duration metric: took 2.356418084s to createHost
	I0520 04:57:22.987470   21461 start.go:83] releasing machines lock for "kubernetes-upgrade-839000", held for 2.357014584s
	W0520 04:57:22.987910   21461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-839000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:57:22.996653   21461 out.go:177] 
	W0520 04:57:23.001690   21461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:57:23.001732   21461 out.go:239] * 
	* 
	W0520 04:57:23.004199   21461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:57:23.013599   21461 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-839000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-839000: (2.101910666s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-839000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-839000 status --format={{.Host}}: exit status 7 (58.105333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182035125s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-839000" primary control-plane node in "kubernetes-upgrade-839000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-839000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:57:25.219739   21490 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:57:25.219883   21490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:57:25.219886   21490 out.go:304] Setting ErrFile to fd 2...
	I0520 04:57:25.219888   21490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:57:25.220013   21490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:57:25.221015   21490 out.go:298] Setting JSON to false
	I0520 04:57:25.238236   21490 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10616,"bootTime":1716195629,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:57:25.238295   21490 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:57:25.243430   21490 out.go:177] * [kubernetes-upgrade-839000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:57:25.251340   21490 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:57:25.251404   21490 notify.go:220] Checking for updates...
	I0520 04:57:25.257273   21490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:57:25.260306   21490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:57:25.263339   21490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:57:25.266291   21490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:57:25.269297   21490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:57:25.272598   21490 config.go:182] Loaded profile config "kubernetes-upgrade-839000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 04:57:25.272867   21490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:57:25.277234   21490 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:57:25.284355   21490 start.go:297] selected driver: qemu2
	I0520 04:57:25.284363   21490 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:57:25.284443   21490 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:57:25.286971   21490 cni.go:84] Creating CNI manager for ""
	I0520 04:57:25.286989   21490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:57:25.287019   21490 start.go:340] cluster config:
	{Name:kubernetes-upgrade-839000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:57:25.291199   21490 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:57:25.298294   21490 out.go:177] * Starting "kubernetes-upgrade-839000" primary control-plane node in "kubernetes-upgrade-839000" cluster
	I0520 04:57:25.302152   21490 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:57:25.302166   21490 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:57:25.302179   21490 cache.go:56] Caching tarball of preloaded images
	I0520 04:57:25.302233   21490 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:57:25.302239   21490 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:57:25.302298   21490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kubernetes-upgrade-839000/config.json ...
	I0520 04:57:25.302749   21490 start.go:360] acquireMachinesLock for kubernetes-upgrade-839000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:57:25.302780   21490 start.go:364] duration metric: took 22.458µs to acquireMachinesLock for "kubernetes-upgrade-839000"
	I0520 04:57:25.302790   21490 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:57:25.302796   21490 fix.go:54] fixHost starting: 
	I0520 04:57:25.302909   21490 fix.go:112] recreateIfNeeded on kubernetes-upgrade-839000: state=Stopped err=<nil>
	W0520 04:57:25.302917   21490 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:57:25.311316   21490 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-839000" ...
	I0520 04:57:25.315335   21490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:89:91:5b:65:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:25.317374   21490 main.go:141] libmachine: STDOUT: 
	I0520 04:57:25.317395   21490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:57:25.317426   21490 fix.go:56] duration metric: took 14.630459ms for fixHost
	I0520 04:57:25.317430   21490 start.go:83] releasing machines lock for "kubernetes-upgrade-839000", held for 14.646083ms
	W0520 04:57:25.317436   21490 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:57:25.317469   21490 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:57:25.317473   21490 start.go:728] Will try again in 5 seconds ...
	I0520 04:57:30.319677   21490 start.go:360] acquireMachinesLock for kubernetes-upgrade-839000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:57:30.320159   21490 start.go:364] duration metric: took 373.458µs to acquireMachinesLock for "kubernetes-upgrade-839000"
	I0520 04:57:30.320346   21490 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:57:30.320366   21490 fix.go:54] fixHost starting: 
	I0520 04:57:30.321024   21490 fix.go:112] recreateIfNeeded on kubernetes-upgrade-839000: state=Stopped err=<nil>
	W0520 04:57:30.321045   21490 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:57:30.329415   21490 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-839000" ...
	I0520 04:57:30.333449   21490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:89:91:5b:65:fe -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubernetes-upgrade-839000/disk.qcow2
	I0520 04:57:30.340027   21490 main.go:141] libmachine: STDOUT: 
	I0520 04:57:30.340068   21490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:57:30.340133   21490 fix.go:56] duration metric: took 19.7715ms for fixHost
	I0520 04:57:30.340145   21490 start.go:83] releasing machines lock for "kubernetes-upgrade-839000", held for 19.94975ms
	W0520 04:57:30.340276   21490 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-839000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:57:30.348420   21490 out.go:177] 
	W0520 04:57:30.349688   21490 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:57:30.349707   21490 out.go:239] * 
	* 
	W0520 04:57:30.351482   21490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:57:30.360365   21490 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-839000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-839000 version --output=json: exit status 1 (58.257875ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-839000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-20 04:57:30.432196 -0700 PDT m=+931.340937667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-839000 -n kubernetes-upgrade-839000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-839000 -n kubernetes-upgrade-839000: exit status 7 (31.077583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-839000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-839000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-839000
--- FAIL: TestKubernetesUpgrade (17.68s)
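
For reference, the upgrade path this test exercises can be replayed by hand once socket_vmnet is healthy; these are the exact commands logged above (the profile name is generated per run):

    out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2
    out/minikube-darwin-arm64 stop -p kubernetes-upgrade-839000
    out/minikube-darwin-arm64 start -p kubernetes-upgrade-839000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2
    kubectl --context kubernetes-upgrade-839000 version --output=json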

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.15s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18929
- KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2415588412/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.15s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.94s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18929
- KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current257971407/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.94s)
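
Both TestHyperkitDriverSkipUpgrade subtests above fail for the same reason: hyperkit runs only on Intel Macs, and this agent is darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56 in these runs). A sketch of the guard such a run would need before invoking hyperkit (hypothetical shell form; the suite itself is Go and could equally check runtime.GOARCH):

    # Skip hyperkit-based checks unless the host is an Intel Mac.
    if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" != "x86_64" ]; then
        echo "hyperkit is unsupported on darwin/$(uname -m); skipping" >&2
        exit 0
    fi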

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (574.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2041811379 start -p stopped-upgrade-298000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2041811379 start -p stopped-upgrade-298000 --memory=2200 --vm-driver=qemu2 : (41.300111291s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2041811379 -p stopped-upgrade-298000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2041811379 -p stopped-upgrade-298000 stop: (12.114492042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-298000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-298000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.936303417s)
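
Unlike the qemu2 failures above, this run gets past VM creation: the stopped-upgrade-298000 profile was created by minikube v1.26.0 with QEMU user-mode networking, so the restart does not go through socket_vmnet at all. The two netdev styles, extracted from the QEMU command lines logged in this report:

    # socket_vmnet (requires the host daemon; connection refused in the runs above):
    #   -device virtio-net-pci,netdev=net0,... -netdev socket,id=net0,fd=3
    # user-mode slirp (used below; no host daemon, SSH/Docker reach the guest via hostfwd):
    #   -nic user,model=virtio,hostfwd=tcp::54138-:22,hostfwd=tcp::54139-:2376,hostname=stopped-upgrade-298000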

                                                
                                                
-- stdout --
	* [stopped-upgrade-298000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-298000" primary control-plane node in "stopped-upgrade-298000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-298000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:58:25.069623   21535 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:58:25.069839   21535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:25.069843   21535 out.go:304] Setting ErrFile to fd 2...
	I0520 04:58:25.069846   21535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:58:25.070018   21535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:58:25.071250   21535 out.go:298] Setting JSON to false
	I0520 04:58:25.091257   21535 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":10676,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:58:25.091335   21535 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:58:25.095396   21535 out.go:177] * [stopped-upgrade-298000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:58:25.103344   21535 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:58:25.103429   21535 notify.go:220] Checking for updates...
	I0520 04:58:25.110394   21535 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:58:25.113395   21535 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:58:25.116411   21535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:58:25.119373   21535 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:58:25.122355   21535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:58:25.125661   21535 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:58:25.129373   21535 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:58:25.132324   21535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:58:25.136395   21535 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:58:25.142291   21535 start.go:297] selected driver: qemu2
	I0520 04:58:25.142296   21535 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:58:25.142347   21535 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:58:25.144837   21535 cni.go:84] Creating CNI manager for ""
	I0520 04:58:25.144855   21535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:58:25.144884   21535 start.go:340] cluster config:
	{Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:58:25.144939   21535 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:58:25.152408   21535 out.go:177] * Starting "stopped-upgrade-298000" primary control-plane node in "stopped-upgrade-298000" cluster
	I0520 04:58:25.156383   21535 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:58:25.156409   21535 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 04:58:25.156420   21535 cache.go:56] Caching tarball of preloaded images
	I0520 04:58:25.156485   21535 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:58:25.156492   21535 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 04:58:25.156552   21535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/config.json ...
	I0520 04:58:25.156895   21535 start.go:360] acquireMachinesLock for stopped-upgrade-298000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:58:25.156931   21535 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "stopped-upgrade-298000"
	I0520 04:58:25.156940   21535 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:58:25.156946   21535 fix.go:54] fixHost starting: 
	I0520 04:58:25.157069   21535 fix.go:112] recreateIfNeeded on stopped-upgrade-298000: state=Stopped err=<nil>
	W0520 04:58:25.157079   21535 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:58:25.165337   21535 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-298000" ...
	I0520 04:58:25.169419   21535 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/qemu.pid -nic user,model=virtio,hostfwd=tcp::54138-:22,hostfwd=tcp::54139-:2376,hostname=stopped-upgrade-298000 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/disk.qcow2
	I0520 04:58:25.213945   21535 main.go:141] libmachine: STDOUT: 
	I0520 04:58:25.213970   21535 main.go:141] libmachine: STDERR: 
	I0520 04:58:25.213974   21535 main.go:141] libmachine: Waiting for VM to start (ssh -p 54138 docker@127.0.0.1)...
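
The single-line qemu invocation above is easier to audit unpacked. Below is a hypothetical, trimmed re-run (QMP monitor and pidfile options omitted); the firmware/ISO/disk paths are placeholders standing in for the minikube cache paths shown in the log:

    #!/usr/bin/env bash
    # Trimmed sketch of the qemu2 restart command logged above; paths are placeholders.
    FW=/opt/homebrew/share/qemu/edk2-aarch64-code.fd   # UEFI firmware, mapped as pflash
    ISO=./boot2docker.iso                              # buildroot guest image
    DISK=./disk.qcow2                                  # persistent machine disk

    qemu-system-aarch64 \
      -M virt,highmem=off -cpu host -accel hvf \
      -m 2200 -smp 2 \
      -drive file="$FW",readonly=on,format=raw,if=pflash \
      -boot d -cdrom "$ISO" \
      -nic user,model=virtio,hostfwd=tcp::54138-:22,hostfwd=tcp::54139-:2376 \
      -daemonize "$DISK"
    # User-mode networking: host port 54138 forwards to guest SSH (22) and
    # 54139 to the Docker TLS endpoint (2376); minikube then polls SSH until boot.
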
	I0520 04:58:44.609972   21535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/config.json ...
	I0520 04:58:44.610283   21535 machine.go:94] provisionDockerMachine start ...
	I0520 04:58:44.610378   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.610621   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.610628   21535 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:58:44.673743   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:58:44.673756   21535 buildroot.go:166] provisioning hostname "stopped-upgrade-298000"
	I0520 04:58:44.673812   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.673941   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.673948   21535 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-298000 && echo "stopped-upgrade-298000" | sudo tee /etc/hostname
	I0520 04:58:44.736023   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-298000
	
	I0520 04:58:44.736075   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.736193   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.736229   21535 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-298000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-298000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-298000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:58:44.795663   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: 
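
Stitched together, the two SSH commands above form one idempotent provisioning script. This is a sketch; NAME stands in for the machine name minikube substitutes:

    # Sketch of the hostname provisioning performed over SSH above.
    NAME=stopped-upgrade-298000
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # Ensure /etc/hosts maps 127.0.1.1 to the machine name exactly once.
    if ! grep -xq '.*\s'"$NAME" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 '"$NAME"'/g' /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi
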
	I0520 04:58:44.795680   21535 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18929-19024/.minikube CaCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18929-19024/.minikube}
	I0520 04:58:44.795692   21535 buildroot.go:174] setting up certificates
	I0520 04:58:44.795701   21535 provision.go:84] configureAuth start
	I0520 04:58:44.795707   21535 provision.go:143] copyHostCerts
	I0520 04:58:44.795774   21535 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem, removing ...
	I0520 04:58:44.795784   21535 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem
	I0520 04:58:44.795894   21535 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.pem (1082 bytes)
	I0520 04:58:44.796090   21535 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem, removing ...
	I0520 04:58:44.796094   21535 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem
	I0520 04:58:44.796140   21535 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/cert.pem (1123 bytes)
	I0520 04:58:44.796250   21535 exec_runner.go:144] found /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem, removing ...
	I0520 04:58:44.796253   21535 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem
	I0520 04:58:44.796293   21535 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18929-19024/.minikube/key.pem (1675 bytes)
	I0520 04:58:44.796393   21535 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-298000 san=[127.0.0.1 localhost minikube stopped-upgrade-298000]
	I0520 04:58:44.858117   21535 provision.go:177] copyRemoteCerts
	I0520 04:58:44.858177   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:58:44.858188   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 04:58:44.887290   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 04:58:44.893785   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0520 04:58:44.900192   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 04:58:44.909095   21535 provision.go:87] duration metric: took 113.390542ms to configureAuth
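
configureAuth generates the Docker server certificate natively in Go (provision.go:117 above); a rough openssl-CLI equivalent of that step, using the org and SANs listed in the log, would be:

    # Hypothetical openssl rendering of the server-cert generation step;
    # minikube does this in Go, not by shelling out to openssl.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.stopped-upgrade-298000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-298000') \
      -days 365 -out server.pem
    # ca.pem, server.pem and server-key.pem are then scp'd to /etc/docker, as logged.
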
	I0520 04:58:44.909106   21535 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:58:44.909222   21535 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:58:44.909263   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.909406   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.909411   21535 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:58:44.963710   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:58:44.963719   21535 buildroot.go:70] root file system type: tmpfs
	I0520 04:58:44.963768   21535 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:58:44.963813   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:44.963932   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:44.963964   21535 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:58:45.020818   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:58:45.020860   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:45.020950   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:45.020958   21535 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:58:45.381013   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 04:58:45.381027   21535 machine.go:97] duration metric: took 770.743041ms to provisionDockerMachine
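
The diff-or-replace command above is the install-if-changed idiom minikube uses for unit files; generalized as a sketch (unit body elided, since it is printed in full above):

    # Write the candidate unit next to the installed one, then swap only on change.
    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'UNIT'
    # ...unit content as printed above (placeholder)...
    UNIT
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      # diff exits non-zero when the files differ *or* the target is missing,
      # which is why this first boot logs "can't stat ... docker.service".
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }
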
	I0520 04:58:45.381034   21535 start.go:293] postStartSetup for "stopped-upgrade-298000" (driver="qemu2")
	I0520 04:58:45.381041   21535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:58:45.381104   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:58:45.381114   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 04:58:45.409066   21535 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:58:45.410324   21535 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 04:58:45.410331   21535 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18929-19024/.minikube/addons for local assets ...
	I0520 04:58:45.410414   21535 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18929-19024/.minikube/files for local assets ...
	I0520 04:58:45.410514   21535 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem -> 195172.pem in /etc/ssl/certs
	I0520 04:58:45.410626   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:58:45.413077   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem --> /etc/ssl/certs/195172.pem (1708 bytes)
	I0520 04:58:45.420015   21535 start.go:296] duration metric: took 38.975666ms for postStartSetup
	I0520 04:58:45.420028   21535 fix.go:56] duration metric: took 20.26323s for fixHost
	I0520 04:58:45.420062   21535 main.go:141] libmachine: Using SSH client type: native
	I0520 04:58:45.420160   21535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044e2900] 0x1044e5160 <nil>  [] 0s} localhost 54138 <nil> <nil>}
	I0520 04:58:45.420165   21535 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 04:58:45.472877   21535 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206325.165230337
	
	I0520 04:58:45.472886   21535 fix.go:216] guest clock: 1716206325.165230337
	I0520 04:58:45.472890   21535 fix.go:229] Guest: 2024-05-20 04:58:45.165230337 -0700 PDT Remote: 2024-05-20 04:58:45.42003 -0700 PDT m=+20.383817251 (delta=-254.799663ms)
	I0520 04:58:45.472903   21535 fix.go:200] guest clock delta is within tolerance: -254.799663ms
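
fix.go compares the guest clock against the host clock and only resynchronizes when the delta leaves tolerance; a rough shell rendering of the probe (the guest's busybox date supports %N, as the fractional epoch above shows, while BSD date on the macOS host does not):

    # Sketch of the guest-clock skew check, run from the host.
    guest=$(ssh -p 54138 docker@127.0.0.1 date +%s.%N)
    host=$(date +%s)     # whole seconds are enough for a tolerance check
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.3fs\n", g - h }'
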
	I0520 04:58:45.472906   21535 start.go:83] releasing machines lock for "stopped-upgrade-298000", held for 20.316117625s
	I0520 04:58:45.472962   21535 ssh_runner.go:195] Run: cat /version.json
	I0520 04:58:45.472972   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 04:58:45.472962   21535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:58:45.473002   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	W0520 04:58:45.473602   21535 sshutil.go:64] dial failure (will retry): dial tcp [::1]:54138: connect: connection refused
	I0520 04:58:45.473626   21535 retry.go:31] will retry after 232.14207ms: dial tcp [::1]:54138: connect: connection refused
	W0520 04:58:45.750848   21535 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 04:58:45.750973   21535 ssh_runner.go:195] Run: systemctl --version
	I0520 04:58:45.754662   21535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 04:58:45.757631   21535 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:58:45.757682   21535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 04:58:45.762696   21535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 04:58:45.770345   21535 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
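
The two find/sed one-liners above are hard to read inline. Unpacked, the first rewrites any bridge CNI config (skipping podman and .mk_disabled files) to drop IPv6 entries and pin the subnet to 10.244.0.0/16; the second does the same for podman configs, also pinning the gateway. A sketch of the podman half, reusing the same sed programs:

    # Pin podman CNI configs to the 10.244.0.0/16 pod CIDR (same seds as above).
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*podman*' -not -name '*.mk_disabled' \
      -exec sudo sed -i -r \
        -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
        -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {} \;
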
	I0520 04:58:45.770358   21535 start.go:494] detecting cgroup driver to use...
	I0520 04:58:45.770477   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:58:45.780018   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 04:58:45.784014   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:58:45.787673   21535 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:58:45.787702   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:58:45.791146   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:58:45.794513   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:58:45.797467   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:58:45.800287   21535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:58:45.803616   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:58:45.807095   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:58:45.810024   21535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:58:45.813011   21535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:58:45.816053   21535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:58:45.819159   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:45.895882   21535 ssh_runner.go:195] Run: sudo systemctl restart containerd
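
Taken together, the sed passes above leave /etc/containerd/config.toml configured for the cgroupfs driver. A quick spot-check of the expected outcome (a sketch; run on the guest):

    # Verify the settings the sed edits above should leave behind.
    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # Expected: sandbox_image = "registry.k8s.io/pause:3.7", SystemdCgroup = false
    # (i.e. cgroupfs), conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true.
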
	I0520 04:58:45.906409   21535 start.go:494] detecting cgroup driver to use...
	I0520 04:58:45.906482   21535 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:58:45.911870   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:58:45.916542   21535 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:58:45.922022   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:58:45.926118   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:58:45.930674   21535 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:58:45.972011   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:58:45.977125   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:58:45.982200   21535 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:58:45.983553   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:58:45.986330   21535 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:58:45.991260   21535 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:58:46.073029   21535 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:58:46.158514   21535 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:58:46.158587   21535 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:58:46.163869   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:46.249788   21535 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:58:47.385713   21535 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.135912958s)
	I0520 04:58:47.385778   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:58:47.390359   21535 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 04:58:47.396532   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:58:47.401733   21535 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:58:47.480969   21535 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:58:47.561256   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:47.639003   21535 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:58:47.644484   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:58:47.649065   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:47.728893   21535 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:58:47.766481   21535 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:58:47.766556   21535 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:58:47.768501   21535 start.go:562] Will wait 60s for crictl version
	I0520 04:58:47.768545   21535 ssh_runner.go:195] Run: which crictl
	I0520 04:58:47.770148   21535 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:58:47.785213   21535 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
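
The version probe succeeds because the earlier steps pointed crictl at the cri-dockerd socket; reproduced by hand on the guest (sketch):

    # How crictl is wired to cri-dockerd in the steps above.
    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo /usr/bin/crictl version
    # Version: 0.1.0 / RuntimeName: docker / RuntimeVersion: 20.10.16 / RuntimeApiVersion: 1.41.0
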
	I0520 04:58:47.785289   21535 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:58:47.806169   21535 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:58:47.822216   21535 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 04:58:47.822291   21535 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 04:58:47.823701   21535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:58:47.827896   21535 kubeadm.go:877] updating cluster {Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 04:58:47.827946   21535 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:58:47.827975   21535 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:58:47.839600   21535 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:58:47.839609   21535 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:58:47.839637   21535 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:58:47.843656   21535 ssh_runner.go:195] Run: which lz4
	I0520 04:58:47.845058   21535 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 04:58:47.846408   21535 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 04:58:47.846423   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 04:58:48.638132   21535 docker.go:649] duration metric: took 793.107292ms to copy over tarball
	I0520 04:58:48.638187   21535 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:58:49.849071   21535 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.210875958s)
	I0520 04:58:49.849088   21535 ssh_runner.go:146] rm: /preloaded.tar.lz4
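
The preload path above is: check for /preloaded.tar.lz4 on the guest, scp the ~360 MB tarball in when missing, unpack it over /var, then delete it. By hand it looks roughly like this (a sketch; host-side cache path abbreviated, guest reached via the forwarded SSH port from the qemu command earlier):

    # Sketch of the preload transfer/extract sequence logged above.
    TARBALL=preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    ssh -p 54138 docker@127.0.0.1 'stat /preloaded.tar.lz4' 2>/dev/null || \
      scp -P 54138 "$TARBALL" docker@127.0.0.1:/preloaded.tar.lz4
    # Unpack over /var so the overlay2 image layers land in /var/lib/docker:
    ssh -p 54138 docker@127.0.0.1 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'
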
	I0520 04:58:49.866678   21535 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:58:49.869989   21535 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 04:58:49.875246   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:49.958525   21535 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:58:51.502955   21535 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5444235s)
	I0520 04:58:51.503054   21535 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:58:51.526060   21535 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:58:51.526069   21535 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:58:51.526074   21535 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 04:58:51.532405   21535 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.532484   21535 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.532534   21535 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:51.532533   21535 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:51.532609   21535 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:51.532618   21535 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:51.532865   21535 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:58:51.532898   21535 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:51.540512   21535 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:58:51.540599   21535 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.540665   21535 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.540780   21535 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:51.541196   21535 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:51.541377   21535 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:51.541386   21535 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:51.541474   21535 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:51.952812   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 04:58:51.959888   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.966441   21535 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 04:58:51.966461   21535 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 04:58:51.966514   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 04:58:51.968232   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.972076   21535 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 04:58:51.972095   21535 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.972143   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:58:51.980765   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:51.989363   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:58:51.989480   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 04:58:51.991476   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 04:58:51.991577   21535 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 04:58:51.991595   21535 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:51.991634   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:58:52.000352   21535 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 04:58:52.000357   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 04:58:52.000380   21535 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:58:52.000389   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 04:58:52.000421   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0520 04:58:52.000964   21535 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:58:52.001073   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:52.009099   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 04:58:52.016860   21535 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 04:58:52.016874   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 04:58:52.019783   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 04:58:52.019836   21535 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 04:58:52.019851   21535 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:52.019898   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:58:52.020100   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:52.044694   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:52.065165   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 04:58:52.065201   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:58:52.065219   21535 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 04:58:52.065237   21535 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:52.065284   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 04:58:52.065304   21535 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 04:58:52.065306   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:58:52.065316   21535 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:52.065342   21535 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:58:52.066714   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 04:58:52.066731   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 04:58:52.095724   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:58:52.095821   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 04:58:52.095847   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:58:52.103673   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 04:58:52.103697   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 04:58:52.105332   21535 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:58:52.105340   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 04:58:52.240930   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 04:58:52.289646   21535 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:58:52.289659   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0520 04:58:52.356214   21535 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:58:52.356331   21535 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:52.434111   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 04:58:52.434134   21535 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 04:58:52.434161   21535 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:52.434216   21535 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:58:52.448298   21535 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:58:52.448410   21535 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:58:52.449933   21535 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 04:58:52.449948   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 04:58:52.479750   21535 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:58:52.479763   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 04:58:52.716386   21535 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 04:58:52.716427   21535 cache_images.go:92] duration metric: took 1.1903555s to LoadCachedImages
	W0520 04:58:52.716469   21535 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
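
Each image in the list follows the same inspect / rmi / scp / docker-load cycle; distilled for one image (a sketch, with pause:3.7 as the example):

    # Sketch of the per-image reload cycle repeated above.
    img=registry.k8s.io/pause:3.7
    tarball=/var/lib/minikube/images/pause_3.7
    # 1. The preload shipped k8s.gcr.io tags, so the registry.k8s.io tag is
    #    absent ("needs transfer"); any stale tag is removed first:
    docker rmi "$img" 2>/dev/null || true
    # 2. The host scp's the cached image tarball to $tarball, then loads it:
    sudo cat "$tarball" | docker load
    # kube-controller-manager fails before step 2: its tarball is missing from
    # the host cache, producing the "Unable to load cached images" warning above.
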
	I0520 04:58:52.716475   21535 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 04:58:52.716527   21535 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-298000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:58:52.716587   21535 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:58:52.735249   21535 cni.go:84] Creating CNI manager for ""
	I0520 04:58:52.735261   21535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:58:52.735268   21535 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:58:52.735332   21535 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-298000 NodeName:stopped-upgrade-298000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:58:52.735408   21535 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-298000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 04:58:52.735461   21535 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 04:58:52.738957   21535 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:58:52.738986   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 04:58:52.741592   21535 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 04:58:52.746388   21535 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:58:52.751281   21535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
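
The staged kubeadm.yaml.new is the three-document config printed above; once promoted, it is what kubeadm consumes at cluster bring-up. Roughly (a sketch; minikube invokes kubeadm itself, with additional flags such as preflight-error ignores):

    # Rough shape of the bring-up this config feeds (minikube runs this itself).
    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml
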
	I0520 04:58:52.757038   21535 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 04:58:52.758418   21535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:58:52.761883   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:58:52.843145   21535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:58:52.850221   21535 certs.go:68] Setting up /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000 for IP: 10.0.2.15
	I0520 04:58:52.850229   21535 certs.go:194] generating shared ca certs ...
	I0520 04:58:52.850238   21535 certs.go:226] acquiring lock for ca certs: {Name:mk319383c68f33c5310e8442d826dee5d3ed7b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:52.850402   21535 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.key
	I0520 04:58:52.850437   21535 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.key
	I0520 04:58:52.850442   21535 certs.go:256] generating profile certs ...
	I0520 04:58:52.850508   21535 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key
	I0520 04:58:52.850526   21535 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7
	I0520 04:58:52.850537   21535 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 04:58:53.022678   21535 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7 ...
	I0520 04:58:53.022689   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7: {Name:mk7049d0be65a263299d9c17e36039183748ec76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:53.023611   21535 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7 ...
	I0520 04:58:53.023620   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7: {Name:mk09b4e706952e42d7f87718e4d179ce5362915a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:53.023770   21535 certs.go:381] copying /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt.db4cb5d7 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt
	I0520 04:58:53.023903   21535 certs.go:385] copying /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key.db4cb5d7 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key
	I0520 04:58:53.024042   21535 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/proxy-client.key
	I0520 04:58:53.024171   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517.pem (1338 bytes)
	W0520 04:58:53.024191   21535 certs.go:480] ignoring /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517_empty.pem, impossibly tiny 0 bytes
	I0520 04:58:53.024196   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 04:58:53.024219   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem (1082 bytes)
	I0520 04:58:53.024237   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem (1123 bytes)
	I0520 04:58:53.024257   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/key.pem (1675 bytes)
	I0520 04:58:53.024294   21535 certs.go:484] found cert: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem (1708 bytes)
	I0520 04:58:53.024616   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:58:53.031801   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 04:58:53.039620   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:58:53.046418   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 04:58:53.053191   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 04:58:53.059945   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 04:58:53.067138   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:58:53.074020   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 04:58:53.080488   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/ssl/certs/195172.pem --> /usr/share/ca-certificates/195172.pem (1708 bytes)
	I0520 04:58:53.087453   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:58:53.094189   21535 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/19517.pem --> /usr/share/ca-certificates/19517.pem (1338 bytes)
	I0520 04:58:53.100787   21535 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:58:53.105774   21535 ssh_runner.go:195] Run: openssl version
	I0520 04:58:53.107600   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19517.pem && ln -fs /usr/share/ca-certificates/19517.pem /etc/ssl/certs/19517.pem"
	I0520 04:58:53.110805   21535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19517.pem
	I0520 04:58:53.112243   21535 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 11:42 /usr/share/ca-certificates/19517.pem
	I0520 04:58:53.112266   21535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19517.pem
	I0520 04:58:53.114088   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19517.pem /etc/ssl/certs/51391683.0"
	I0520 04:58:53.116928   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/195172.pem && ln -fs /usr/share/ca-certificates/195172.pem /etc/ssl/certs/195172.pem"
	I0520 04:58:53.120197   21535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/195172.pem
	I0520 04:58:53.121633   21535 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 11:42 /usr/share/ca-certificates/195172.pem
	I0520 04:58:53.121656   21535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/195172.pem
	I0520 04:58:53.123279   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/195172.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:58:53.126271   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:58:53.129085   21535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:58:53.130509   21535 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:54 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:58:53.130528   21535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:58:53.132211   21535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
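
The three symlink commands above follow OpenSSL's hashed-directory convention: "openssl x509 -hash -noout" prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs lets TLS stacks that scan that directory resolve the CA by hash. A minimal Go sketch of the same sequence (the helper name is illustrative, not minikube's actual code, which runs the equivalent shell commands over SSH as shown above):

    // installCACert: compute the OpenSSL subject hash of a PEM certificate,
    // then publish it under /etc/ssl/certs as <hash>.0 so libraries that
    // scan the hashed directory can find it. Illustrative sketch only.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCACert(pemPath string) error {
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Link /etc/ssl/certs/<hash>.0 -> the installed PEM; the ".0" suffix
    	// disambiguates multiple certificates sharing a subject hash.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // equivalent of ln -f
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
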
	I0520 04:58:53.135444   21535 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:58:53.136921   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 04:58:53.139121   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 04:58:53.141073   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 04:58:53.143039   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 04:58:53.144734   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 04:58:53.146504   21535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
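
Each "openssl x509 -checkend 86400" run above communicates through its exit status: the command fails if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether the control-plane certificates need regenerating. A sketch of the equivalent check in Go, assuming a single PEM block per file:

    // expiresWithin reports whether the certificate at path expires within d,
    // which is what `openssl x509 -checkend <seconds>` tests via its exit status.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// -checkend 86400 asks: will NotAfter be reached within the next day?
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
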
	I0520 04:58:53.148277   21535 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54172 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:58:53.148340   21535 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:58:53.158428   21535 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 04:58:53.161291   21535 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 04:58:53.161297   21535 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 04:58:53.161300   21535 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 04:58:53.161322   21535 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 04:58:53.164101   21535 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:58:53.164393   21535 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-298000" does not appear in /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:58:53.164486   21535 kubeconfig.go:62] /Users/jenkins/minikube-integration/18929-19024/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-298000" cluster setting kubeconfig missing "stopped-upgrade-298000" context setting]
	I0520 04:58:53.164668   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:58:53.165087   21535 kapi.go:59] client config for stopped-upgrade-298000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10586c580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:58:53.165395   21535 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 04:58:53.168123   21535 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-298000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
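
The drift check above (kubeadm.go:634) relies on diff's exit status: 0 means the deployed kubeadm.yaml matches the newly generated one, 1 means they differ and the cluster must be reconfigured, and anything else means the comparison itself failed. A hedged Go sketch of that decision:

    // configDrifted runs `diff -u old new` and interprets the exit status
    // the way the drift check above does. Illustrative sketch only.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, error) {
    	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
    	if err == nil {
    		return false, nil // exit 0: files identical, no drift
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, nil // exit 1: files differ, reconfigure the cluster
    	}
    	return false, err // exit 2 or exec failure: e.g. a file is missing
    }

    func main() {
    	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted, err)
    }
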
	I0520 04:58:53.168129   21535 kubeadm.go:1154] stopping kube-system containers ...
	I0520 04:58:53.168168   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:58:53.178927   21535 docker.go:483] Stopping containers: [9fc3ef3cc4af 8c289c175a53 1c19435c85dd aa9323402490 6730da3d3f1a c9cc7b978cad b2100d7c0bd2 df4e1107aafa]
	I0520 04:58:53.178993   21535 ssh_runner.go:195] Run: docker stop 9fc3ef3cc4af 8c289c175a53 1c19435c85dd aa9323402490 6730da3d3f1a c9cc7b978cad b2100d7c0bd2 df4e1107aafa
	I0520 04:58:53.189128   21535 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 04:58:53.194841   21535 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:58:53.197519   21535 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:58:53.197524   21535 kubeadm.go:156] found existing configuration files:
	
	I0520 04:58:53.197543   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf
	I0520 04:58:53.200215   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:58:53.200236   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:58:53.203085   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf
	I0520 04:58:53.205637   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:58:53.205669   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:58:53.208165   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf
	I0520 04:58:53.211033   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:58:53.211053   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:58:53.213597   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf
	I0520 04:58:53.216081   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:58:53.216100   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
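
The four grep/rm cycles above apply one rule per kubeconfig: keep the file only if it already references the expected control-plane endpoint, otherwise delete it so the next kubeadm phase rewrites it (here grep exits with status 2 because none of the files exist yet). A compact sketch of the loop, with the endpoint taken from this log:

    // pruneStaleKubeconfigs mirrors the loop above. Sketch only; the real
    // code runs these commands over SSH inside the guest.
    package main

    import "os/exec"

    func pruneStaleKubeconfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits 1 when the endpoint is absent and 2 when the file
    		// is missing (the case in this log); either way, remove the file.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	pruneStaleKubeconfigs("https://control-plane.minikube.internal:54172")
    }
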
	I0520 04:58:53.219138   21535 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:58:53.221709   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.246003   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.734300   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.864798   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:58:53.895106   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
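
Instead of a full kubeadm init, the restart path replays individual init phases against the regenerated config, with PATH pointed at the cached binaries for the target Kubernetes version. Sketched below; the phase list is copied from the five commands above:

    // replayInitPhases is a sketch of the restart sequence above: each
    // kubeadm init phase is run separately against the generated config.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func replayInitPhases(version, config string) error {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
    			version, phase, config)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			return fmt.Errorf("phase %q: %w", phase, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := replayInitPhases("v1.24.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }
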
	I0520 04:58:53.919515   21535 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:58:53.919591   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:58:54.421763   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:58:54.921679   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:58:54.926089   21535 api_server.go:72] duration metric: took 1.006583583s to wait for apiserver process to appear ...
	I0520 04:58:54.926098   21535 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:58:54.926106   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:58:59.928216   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:58:59.928258   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:04.928499   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:04.928538   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:09.929301   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:09.929351   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:14.930062   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:14.930116   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:19.930953   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:19.930972   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:24.931978   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:24.932076   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:29.933751   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:29.933802   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:34.935030   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:34.935051   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:39.937071   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:39.937137   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:44.939336   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:44.939384   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:49.941769   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:59:49.941836   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:59:54.944352   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
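
Each healthz probe above gives up after five seconds (hence the Client.Timeout errors) and the wait loop simply retries. A stripped-down version of the probe; note the sketch skips TLS verification for brevity, whereas the real client is built from the cluster CA shown in the kapi.go client config earlier in this log:

    // waitForHealthz is a minimal version of the wait loop above: GET
    // /healthz with a 5-second client timeout and retry until the API
    // server answers "ok". Assumption: InsecureSkipVerify stands in for
    // the CA-based client minikube actually constructs.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, attempts int) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the Client.Timeout errors above
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 12))
    }
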
	I0520 04:59:54.944632   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:59:54.975503   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 04:59:54.975638   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:59:54.999499   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 04:59:54.999589   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:59:55.012215   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 04:59:55.012288   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:59:55.023463   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 04:59:55.023528   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:59:55.036521   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 04:59:55.036592   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:59:55.048550   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 04:59:55.048624   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:59:55.059026   21535 logs.go:276] 0 containers: []
	W0520 04:59:55.059037   21535 logs.go:278] No container was found matching "kindnet"
	I0520 04:59:55.059095   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:59:55.069975   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 04:59:55.073559   21535 logs.go:123] Gathering logs for container status ...
	I0520 04:59:55.073565   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:59:55.087697   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 04:59:55.087708   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 04:59:55.103779   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 04:59:55.103791   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 04:59:55.121591   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 04:59:55.121600   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 04:59:55.135964   21535 logs.go:123] Gathering logs for Docker ...
	I0520 04:59:55.135974   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:59:55.161671   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 04:59:55.161688   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:59:55.202062   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 04:59:55.202070   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 04:59:55.216811   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 04:59:55.216825   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 04:59:55.259568   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 04:59:55.259581   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 04:59:55.275617   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 04:59:55.275634   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 04:59:55.287030   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 04:59:55.287039   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 04:59:55.298367   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 04:59:55.298380   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 04:59:55.310406   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 04:59:55.310415   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 04:59:55.321846   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 04:59:55.321856   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:59:55.326155   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:59:55.326161   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:59:55.440985   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 04:59:55.440999   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 04:59:55.455224   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 04:59:55.455237   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
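
When the wait keeps failing, minikube switches to diagnostics: for each control-plane component it lists every container, running or exited, whose name matches k8s_<component>, then tails the last 400 log lines from each. Exited containers matter here, since the previous apiserver's logs explain why the restarted one is stuck. A sketch of that gathering pass:

    // gatherComponentLogs sketches the diagnostic pass above. Illustrative
    // only; minikube runs the same docker commands over SSH in the guest.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func gatherComponentLogs(component string) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		// Tail the last 400 lines, matching the commands in this log.
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
    		gatherComponentLogs(c)
    	}
    }
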
	I0520 04:59:57.969023   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:02.971404   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:02.971689   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:03.001701   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:03.001828   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:03.018530   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:03.018634   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:03.031153   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:03.031222   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:03.042119   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:03.042191   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:03.052189   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:03.052251   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:03.063201   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:03.063270   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:03.074276   21535 logs.go:276] 0 containers: []
	W0520 05:00:03.074292   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:03.074350   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:03.084810   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:03.084832   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:03.084837   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:03.127011   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:03.127027   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:03.165887   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:03.165901   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:03.182014   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:03.182025   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:03.193622   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:03.193633   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:03.205669   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:03.205680   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:03.221321   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:03.221333   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:03.232659   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:03.232668   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:03.250291   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:03.250301   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:03.264449   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:03.264459   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:03.280983   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:03.280992   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:03.296402   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:03.296412   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:03.321149   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:03.321157   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:03.359292   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:03.359300   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:03.363418   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:03.363423   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:03.378904   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:03.378914   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:03.393209   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:03.393220   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:05.907507   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:10.909970   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:10.910182   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:10.938324   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:10.938419   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:10.953268   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:10.953344   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:10.965437   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:10.965509   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:10.976370   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:10.976437   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:10.986939   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:10.987004   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:10.997307   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:10.997380   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:11.007452   21535 logs.go:276] 0 containers: []
	W0520 05:00:11.007464   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:11.007521   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:11.017948   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:11.017968   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:11.017975   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:11.030855   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:11.030869   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:11.035477   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:11.035484   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:11.052228   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:11.052237   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:11.064438   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:11.064448   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:11.076105   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:11.076116   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:11.096464   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:11.096473   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:11.114158   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:11.114169   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:11.128139   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:11.128150   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:11.145267   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:11.145280   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:11.160150   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:11.160162   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:11.200042   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:11.200053   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:11.211587   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:11.211599   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:11.236867   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:11.236877   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:11.251789   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:11.251802   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:11.264273   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:11.264283   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:11.302862   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:11.302872   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:13.842510   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:18.844764   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:18.844911   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:18.858804   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:18.858885   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:18.870465   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:18.870533   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:18.880833   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:18.880895   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:18.891442   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:18.891505   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:18.901405   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:18.901477   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:18.911622   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:18.911696   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:18.926301   21535 logs.go:276] 0 containers: []
	W0520 05:00:18.926310   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:18.926361   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:18.936135   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:18.936154   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:18.936159   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:18.949988   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:18.949999   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:18.961946   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:18.961958   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:18.987412   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:18.987423   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:18.991757   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:18.991764   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:19.011162   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:19.011174   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:19.022968   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:19.022979   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:19.034570   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:19.034581   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:19.071312   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:19.071325   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:19.088762   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:19.088771   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:19.102736   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:19.102749   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:19.123185   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:19.123196   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:19.134574   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:19.134585   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:19.146427   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:19.146437   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:19.164665   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:19.164675   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:19.178824   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:19.178836   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:19.217470   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:19.217490   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:21.758035   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:26.760309   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:26.760565   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:26.784786   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:26.784895   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:26.801733   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:26.801814   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:26.815021   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:26.815097   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:26.826491   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:26.826570   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:26.837055   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:26.837121   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:26.846984   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:26.847056   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:26.856471   21535 logs.go:276] 0 containers: []
	W0520 05:00:26.856484   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:26.856541   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:26.866907   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:26.866926   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:26.866933   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:26.878287   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:26.878298   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:26.897836   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:26.897849   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:26.909610   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:26.909621   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:26.913654   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:26.913662   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:26.950054   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:26.950065   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:26.964763   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:26.964774   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:26.977158   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:26.977171   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:26.991660   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:26.991670   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:27.032796   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:27.032814   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:27.044288   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:27.044302   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:27.059544   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:27.059559   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:27.097218   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:27.097233   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:27.112545   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:27.112561   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:27.127132   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:27.127146   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:27.143914   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:27.143924   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:27.155096   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:27.155106   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:29.681691   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:34.684033   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:34.684227   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:34.701513   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:34.701597   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:34.714807   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:34.714880   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:34.725972   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:34.726043   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:34.736612   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:34.736681   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:34.746563   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:34.746618   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:34.757401   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:34.757466   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:34.767773   21535 logs.go:276] 0 containers: []
	W0520 05:00:34.767785   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:34.767846   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:34.778712   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:34.778731   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:34.778737   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:34.792066   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:34.792076   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:34.803618   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:34.803629   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:34.817312   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:34.817322   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:34.836415   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:34.836424   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:34.849902   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:34.849912   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:34.868142   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:34.868152   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:34.882293   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:34.882303   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:34.894292   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:34.894301   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:34.932961   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:34.932973   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:34.937131   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:34.937140   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:34.951836   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:34.951845   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:34.968262   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:34.968271   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:34.981431   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:34.981441   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:34.992851   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:34.992859   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:35.016018   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:35.016025   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:35.052478   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:35.052488   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:37.593147   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:42.595753   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:42.595942   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:42.612287   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:42.612368   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:42.626442   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:42.626510   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:42.636427   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:42.636489   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:42.646896   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:42.646968   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:42.657144   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:42.657213   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:42.667752   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:42.667821   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:42.678225   21535 logs.go:276] 0 containers: []
	W0520 05:00:42.678238   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:42.678297   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:42.688639   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:42.688659   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:42.688664   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:42.726485   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:42.726491   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:42.744541   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:42.744551   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:42.769431   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:42.769439   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:42.773847   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:42.773853   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:42.789263   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:42.789274   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:42.803727   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:42.803741   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:42.814996   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:42.815008   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:42.832548   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:42.832562   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:42.867596   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:42.867607   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:42.879827   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:42.879840   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:42.891247   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:42.891261   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:42.909257   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:42.909270   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:42.920861   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:42.920873   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:42.935823   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:42.935830   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:42.982969   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:42.982980   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:42.995401   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:42.995410   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:45.509265   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:50.511969   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:50.512204   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:50.539002   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:50.539119   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:50.555845   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:50.555929   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:50.569298   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:50.569377   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:50.580574   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:50.580634   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:50.591032   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:50.591089   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:50.605314   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:50.605385   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:50.615778   21535 logs.go:276] 0 containers: []
	W0520 05:00:50.615791   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:50.615850   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:50.627016   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:50.627033   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:50.627038   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:50.639434   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:50.639448   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:00:50.678712   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:50.678721   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:50.693648   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:50.693660   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:50.717224   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:50.717234   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:50.729019   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:50.729029   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:50.752246   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:50.752262   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:50.764171   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:50.764184   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:50.777012   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:50.777024   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:50.792206   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:50.792218   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:50.807117   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:50.807128   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:50.846559   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:50.846567   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:50.886820   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:50.886841   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:50.901762   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:50.901778   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:50.914918   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:50.914930   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:50.927451   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:50.927468   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:50.932101   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:50.932114   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:53.449115   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:00:58.450292   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:00:58.450500   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:00:58.473680   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:00:58.473769   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:00:58.486703   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:00:58.486780   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:00:58.498955   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:00:58.499024   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:00:58.509474   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:00:58.509548   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:00:58.524170   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:00:58.524234   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:00:58.534959   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:00:58.535026   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:00:58.545037   21535 logs.go:276] 0 containers: []
	W0520 05:00:58.545050   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:00:58.545102   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:00:58.559595   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:00:58.559617   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:00:58.559622   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:00:58.571921   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:00:58.571932   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:00:58.585214   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:00:58.585225   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:00:58.603071   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:00:58.603081   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:00:58.627672   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:00:58.627690   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:00:58.632632   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:00:58.632642   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:00:58.645374   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:00:58.645384   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:00:58.664284   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:00:58.664295   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:00:58.704027   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:00:58.704039   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:00:58.720513   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:00:58.720524   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:00:58.735629   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:00:58.735638   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:00:58.748118   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:00:58.748133   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:00:58.787697   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:00:58.787711   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:00:58.802671   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:00:58.802686   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:00:58.814586   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:00:58.814598   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:00:58.829011   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:00:58.829023   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:00:58.846741   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:00:58.846751   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:01.388521   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:06.390896   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:06.391288   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:06.424067   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:06.424202   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:06.443891   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:06.443977   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:06.459405   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:06.459477   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:06.474862   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:06.474934   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:06.492126   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:06.492194   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:06.504121   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:06.504193   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:06.515892   21535 logs.go:276] 0 containers: []
	W0520 05:01:06.515905   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:06.515962   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:06.527881   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:06.527899   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:06.527904   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:06.540272   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:06.540288   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:06.566799   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:06.566807   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:06.581748   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:06.581761   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:06.602468   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:06.602480   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:06.618221   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:06.618233   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:06.666126   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:06.666146   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:06.682744   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:06.682761   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:06.701434   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:06.701450   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:06.722290   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:06.722300   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:06.734435   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:06.734447   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:06.772606   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:06.772619   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:06.777574   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:06.777587   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:06.791266   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:06.791277   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:06.803665   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:06.803677   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:06.824207   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:06.824221   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:06.835654   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:06.835664   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:09.376446   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:14.378626   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:14.378687   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:14.389930   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:14.390003   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:14.405587   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:14.405660   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:14.416718   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:14.416789   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:14.428083   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:14.428153   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:14.439400   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:14.439475   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:14.450670   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:14.450739   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:14.461274   21535 logs.go:276] 0 containers: []
	W0520 05:01:14.461286   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:14.461349   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:14.475888   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:14.475909   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:14.475915   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:14.480612   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:14.480621   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:14.517601   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:14.517613   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:14.540886   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:14.540904   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:14.553869   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:14.553881   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:14.578899   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:14.578914   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:14.619706   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:14.619715   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:14.634842   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:14.634853   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:14.648122   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:14.648135   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:14.664102   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:14.664113   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:14.689467   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:14.689479   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:14.701938   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:14.701950   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:14.719275   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:14.719289   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:14.733295   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:14.733308   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:14.744803   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:14.744812   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:14.783944   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:14.783957   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:14.795635   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:14.795646   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:17.312262   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:22.313034   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:22.313081   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:22.324155   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:22.324232   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:22.337555   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:22.337621   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:22.349760   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:22.349822   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:22.361839   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:22.361905   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:22.373063   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:22.373124   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:22.384993   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:22.385061   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:22.396158   21535 logs.go:276] 0 containers: []
	W0520 05:01:22.396169   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:22.396233   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:22.408213   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:22.408232   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:22.408237   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:22.433768   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:22.433778   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:22.474907   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:22.474924   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:22.493227   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:22.493239   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:22.509788   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:22.509801   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:22.522182   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:22.522198   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:22.537143   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:22.537155   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:22.551646   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:22.551656   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:22.565261   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:22.565277   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:22.577638   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:22.577652   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:22.593764   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:22.593782   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:22.632493   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:22.632508   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:22.646918   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:22.646929   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:22.658386   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:22.658400   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:22.671897   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:22.671915   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:22.684117   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:22.684133   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:22.688788   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:22.688794   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:25.227304   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:30.229511   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:30.229615   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:30.241115   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:30.241194   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:30.253065   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:30.253137   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:30.264454   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:30.264529   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:30.275893   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:30.275959   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:30.287368   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:30.287436   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:30.299877   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:30.299953   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:30.310730   21535 logs.go:276] 0 containers: []
	W0520 05:01:30.310740   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:30.310798   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:30.322549   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:30.322566   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:30.322570   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:30.341809   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:30.341818   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:30.354961   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:30.354971   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:30.373880   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:30.373890   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:30.390050   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:30.390061   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:30.430063   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:30.430076   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:30.434474   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:30.434483   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:30.475032   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:30.475042   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:30.486533   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:30.486549   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:30.502570   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:30.502583   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:30.517954   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:30.517966   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:30.531303   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:30.531318   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:30.553945   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:30.553952   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:30.593151   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:30.593164   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:30.607507   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:30.607521   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:30.618665   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:30.618679   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:30.630595   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:30.630606   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:33.143425   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:38.145744   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:38.145850   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:38.157370   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:38.157446   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:38.168397   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:38.168468   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:38.179988   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:38.180111   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:38.191083   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:38.191159   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:38.207494   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:38.207567   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:38.224563   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:38.224639   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:38.235692   21535 logs.go:276] 0 containers: []
	W0520 05:01:38.235705   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:38.235774   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:38.247182   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:38.247200   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:38.247205   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:38.285927   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:38.285938   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:38.303105   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:38.303118   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:38.307754   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:38.307761   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:38.345958   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:38.345974   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:38.358188   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:38.358199   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:38.375436   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:38.375449   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:38.399295   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:38.399305   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:38.438653   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:38.438662   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:38.456670   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:38.456681   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:38.477659   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:38.477671   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:38.490345   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:38.490356   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:38.505034   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:38.505046   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:38.516620   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:38.516631   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:38.531497   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:38.531506   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:38.543702   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:38.543715   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:38.555223   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:38.555235   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:41.068328   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:46.070487   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:46.070614   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:46.081956   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:46.082033   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:46.093899   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:46.093977   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:46.105791   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:46.105867   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:46.117601   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:46.117697   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:46.130748   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:46.130817   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:46.141925   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:46.141991   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:46.152615   21535 logs.go:276] 0 containers: []
	W0520 05:01:46.152624   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:46.152686   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:46.164188   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:46.164209   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:46.164215   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:46.178163   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:46.178174   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:46.191415   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:46.191425   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:46.228828   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:46.228841   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:46.240776   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:46.240786   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:46.251939   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:46.251950   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:46.274830   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:46.274839   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:46.286482   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:46.286492   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:46.323318   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:46.323325   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:46.361187   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:46.361201   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:46.375569   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:46.375580   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:46.390061   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:46.390074   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:46.402251   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:46.402260   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:46.417645   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:46.417658   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:46.421716   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:46.421725   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:46.436991   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:46.437003   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:46.454095   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:46.454108   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:48.970195   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:01:53.972388   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:01:53.972576   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:01:53.988098   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:01:53.988171   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:01:53.999507   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:01:53.999590   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:01:54.011107   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:01:54.011181   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:01:54.022351   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:01:54.022436   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:01:54.037571   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:01:54.037648   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:01:54.048716   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:01:54.048788   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:01:54.058939   21535 logs.go:276] 0 containers: []
	W0520 05:01:54.058955   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:01:54.059010   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:01:54.070000   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:01:54.070018   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:01:54.070023   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:01:54.107551   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:01:54.107560   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:01:54.121889   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:01:54.121904   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:01:54.135900   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:01:54.135913   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:01:54.147584   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:01:54.147593   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:01:54.164068   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:01:54.164081   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:01:54.175760   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:01:54.175772   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:01:54.187576   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:01:54.187589   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:01:54.191774   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:01:54.191804   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:01:54.203003   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:01:54.203013   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:01:54.214588   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:01:54.214597   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:01:54.228441   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:01:54.228455   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:01:54.240152   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:01:54.240161   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:01:54.263748   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:01:54.263756   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:01:54.299479   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:01:54.299490   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:01:54.337095   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:01:54.337107   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:01:54.354296   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:01:54.354307   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:01:56.873389   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:01.875732   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:01.875842   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:01.887594   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:01.887667   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:01.899484   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:01.899561   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:01.911334   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:01.911405   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:01.923177   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:01.923253   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:01.933881   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:01.933951   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:01.945100   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:01.945166   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:01.955436   21535 logs.go:276] 0 containers: []
	W0520 05:02:01.955447   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:01.955500   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:01.966581   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:01.966601   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:01.966607   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:01.980185   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:01.980195   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:01.995458   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:01.995471   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:02.009837   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:02.009847   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:02.022042   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:02.022056   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:02.060962   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:02.060971   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:02.098540   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:02.098554   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:02.113100   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:02.113113   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:02.130957   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:02.130966   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:02.141923   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:02.141937   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:02.154846   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:02.154857   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:02.166926   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:02.166935   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:02.171529   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:02.171535   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:02.205381   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:02.205394   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:02.219524   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:02.219538   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:02.239817   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:02.239831   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:02.251505   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:02.251514   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:04.776222   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:09.776489   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:09.776579   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:09.789029   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:09.789100   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:09.799933   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:09.800009   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:09.815981   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:09.816055   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:09.827074   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:09.827150   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:09.837844   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:09.837916   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:09.848630   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:09.848709   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:09.859694   21535 logs.go:276] 0 containers: []
	W0520 05:02:09.859706   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:09.859767   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:09.870307   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:09.870327   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:09.870334   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:09.909078   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:09.909086   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:09.944482   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:09.944494   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:09.956397   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:09.956407   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:09.967516   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:09.967526   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:09.981700   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:09.981710   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:10.019610   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:10.019624   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:10.033790   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:10.033800   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:10.049388   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:10.049401   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:10.064887   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:10.064897   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:10.079405   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:10.079418   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:10.091017   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:10.091029   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:10.113954   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:10.113964   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:10.126213   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:10.126223   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:10.130516   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:10.130522   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:10.145851   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:10.145860   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:10.161374   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:10.161385   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
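
The block above is one complete iteration of minikube's recovery loop: probe https://10.0.2.15:8443/healthz with a 5-second client timeout, then enumerate the control-plane containers with docker ps -a --filter=name=k8s_<component> and tail 400 lines from each. Below is a minimal Go sketch of the health-poll half, assuming a 2-second retry interval and skipped TLS verification for illustration (the log does not state minikube's actual backoff or TLS handling):

    // Minimal sketch of an apiserver healthz poll with a 5s client timeout,
    // matching the "Client.Timeout exceeded while awaiting headers" errors above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // source of "context deadline exceeded"
            Transport: &http.Transport{
                // Assumption: skip verification of the test VM's self-signed CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(2 * time.Second) // assumed retry interval
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }

Every probe in this section fails the same way, so the loop keeps re-gathering logs until the restartPrimaryControlPlane budget runs out at 05:02:57 below.
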
	I0520 05:02:12.683572   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:17.685841   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:17.685931   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:17.696659   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:17.696733   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:17.707047   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:17.707125   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:17.717689   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:17.717749   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:17.727671   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:17.727749   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:17.738178   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:17.738254   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:17.748769   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:17.748830   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:17.759411   21535 logs.go:276] 0 containers: []
	W0520 05:02:17.759422   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:17.759484   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:17.770453   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:17.770471   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:17.770476   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:17.809397   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:17.809408   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:17.824387   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:17.824397   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:17.835907   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:17.835918   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:17.847671   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:17.847679   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:17.884860   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:17.884875   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:17.898893   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:17.898905   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:17.912408   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:17.912418   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:17.929853   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:17.929862   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:17.952828   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:17.952833   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:17.987775   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:17.987784   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:18.001952   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:18.001961   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:18.017322   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:18.017332   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:18.030959   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:18.030968   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:18.041837   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:18.041849   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:18.054070   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:18.054080   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:18.058278   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:18.058284   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:20.572287   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:25.575504   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:25.575592   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:25.587150   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:25.587224   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:25.598039   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:25.598103   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:25.608668   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:25.608740   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:25.622342   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:25.622419   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:25.632285   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:25.632350   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:25.642815   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:25.642888   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:25.652939   21535 logs.go:276] 0 containers: []
	W0520 05:02:25.652956   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:25.653012   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:25.663166   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:25.663185   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:25.663191   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:25.704853   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:25.704864   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:25.747606   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:25.747622   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:25.760520   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:25.760531   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:25.772480   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:25.772492   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:25.784718   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:25.784729   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:25.796704   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:25.796717   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:25.810016   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:25.810026   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:25.814248   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:25.814254   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:25.827919   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:25.827929   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:25.842764   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:25.842774   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:25.859516   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:25.859526   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:25.870667   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:25.870680   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:25.891968   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:25.891976   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:25.928539   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:25.928546   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:25.945300   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:25.945310   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:25.959786   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:25.959798   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:28.473402   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:33.475605   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:33.475689   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:33.486213   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:33.486287   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:33.496437   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:33.496501   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:33.507872   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:33.507942   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:33.518460   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:33.518522   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:33.528741   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:33.528807   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:33.539491   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:33.539554   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:33.553805   21535 logs.go:276] 0 containers: []
	W0520 05:02:33.553820   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:33.553882   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:33.564014   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:33.564034   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:33.564040   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:33.568714   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:33.568722   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:33.585895   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:33.585905   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:33.601253   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:33.601265   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:33.613883   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:33.613894   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:33.625697   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:33.625708   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:33.640069   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:33.640082   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:33.656267   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:33.656276   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:33.670224   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:33.670235   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:33.707148   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:33.707159   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:33.720972   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:33.720982   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:33.732291   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:33.732301   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:33.755801   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:33.755809   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:33.768759   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:33.768771   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:33.804028   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:33.804042   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:33.816805   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:33.816817   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:33.838467   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:33.838481   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:36.377964   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:41.380258   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:41.380401   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:41.390882   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:41.390947   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:41.401461   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:41.401533   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:41.412038   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:41.412108   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:41.422601   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:41.422673   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:41.433269   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:41.433335   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:41.443591   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:41.443659   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:41.454243   21535 logs.go:276] 0 containers: []
	W0520 05:02:41.454254   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:41.454310   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:41.464421   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:41.464439   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:41.464444   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:41.483515   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:41.483526   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:41.503097   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:41.503106   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:41.515679   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:41.515689   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:41.527283   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:41.527293   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:41.531758   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:41.531765   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:41.565739   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:41.565753   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:41.577748   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:41.577762   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:41.591839   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:41.591850   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:41.608581   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:41.608592   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:41.621905   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:41.621915   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:41.642516   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:41.642527   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:41.658059   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:41.658068   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:41.696804   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:41.696811   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:41.709046   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:41.709057   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:41.731907   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:41.731913   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:41.772059   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:41.772070   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:44.288971   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:49.291224   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:49.291304   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:02:49.301794   21535 logs.go:276] 2 containers: [e25271f38c5c 1c19435c85dd]
	I0520 05:02:49.301871   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:02:49.312675   21535 logs.go:276] 2 containers: [f529979573b9 8c289c175a53]
	I0520 05:02:49.312746   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:02:49.322666   21535 logs.go:276] 1 containers: [8e33dc772ea6]
	I0520 05:02:49.322730   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:02:49.338869   21535 logs.go:276] 2 containers: [2edf1f7bebc4 aa9323402490]
	I0520 05:02:49.338953   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:02:49.352138   21535 logs.go:276] 1 containers: [124fbfc13c7b]
	I0520 05:02:49.352215   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:02:49.364282   21535 logs.go:276] 2 containers: [f11b2bde4d74 9fc3ef3cc4af]
	I0520 05:02:49.364352   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:02:49.374328   21535 logs.go:276] 0 containers: []
	W0520 05:02:49.374340   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:02:49.374403   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:02:49.384766   21535 logs.go:276] 2 containers: [ab8cbca1c602 e6fea4085caf]
	I0520 05:02:49.384785   21535 logs.go:123] Gathering logs for etcd [f529979573b9] ...
	I0520 05:02:49.384791   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f529979573b9"
	I0520 05:02:49.402271   21535 logs.go:123] Gathering logs for etcd [8c289c175a53] ...
	I0520 05:02:49.402280   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c289c175a53"
	I0520 05:02:49.416772   21535 logs.go:123] Gathering logs for kube-controller-manager [f11b2bde4d74] ...
	I0520 05:02:49.416781   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11b2bde4d74"
	I0520 05:02:49.433704   21535 logs.go:123] Gathering logs for kube-controller-manager [9fc3ef3cc4af] ...
	I0520 05:02:49.433713   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fc3ef3cc4af"
	I0520 05:02:49.447047   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:02:49.447058   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:02:49.470451   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:02:49.470460   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:02:49.483691   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:02:49.483704   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:02:49.521842   21535 logs.go:123] Gathering logs for kube-apiserver [e25271f38c5c] ...
	I0520 05:02:49.521850   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e25271f38c5c"
	I0520 05:02:49.535317   21535 logs.go:123] Gathering logs for kube-apiserver [1c19435c85dd] ...
	I0520 05:02:49.535331   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c19435c85dd"
	I0520 05:02:49.573389   21535 logs.go:123] Gathering logs for coredns [8e33dc772ea6] ...
	I0520 05:02:49.573403   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e33dc772ea6"
	I0520 05:02:49.584634   21535 logs.go:123] Gathering logs for kube-scheduler [aa9323402490] ...
	I0520 05:02:49.584644   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9323402490"
	I0520 05:02:49.601612   21535 logs.go:123] Gathering logs for kube-proxy [124fbfc13c7b] ...
	I0520 05:02:49.601626   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124fbfc13c7b"
	I0520 05:02:49.612901   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:02:49.612913   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:02:49.617065   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:02:49.617072   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:02:49.653456   21535 logs.go:123] Gathering logs for storage-provisioner [e6fea4085caf] ...
	I0520 05:02:49.653472   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6fea4085caf"
	I0520 05:02:49.664620   21535 logs.go:123] Gathering logs for kube-scheduler [2edf1f7bebc4] ...
	I0520 05:02:49.664630   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2edf1f7bebc4"
	I0520 05:02:49.676278   21535 logs.go:123] Gathering logs for storage-provisioner [ab8cbca1c602] ...
	I0520 05:02:49.676293   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab8cbca1c602"
	I0520 05:02:52.190495   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:02:57.192812   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:02:57.192856   21535 kubeadm.go:591] duration metric: took 4m4.033322208s to restartPrimaryControlPlane
	W0520 05:02:57.192905   21535 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 05:02:57.192925   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 05:02:58.195025   21535 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002095833s)
	I0520 05:02:58.195095   21535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 05:02:58.200147   21535 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 05:02:58.203049   21535 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 05:02:58.206012   21535 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 05:02:58.206020   21535 kubeadm.go:156] found existing configuration files:
	
	I0520 05:02:58.206044   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf
	I0520 05:02:58.208967   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 05:02:58.208996   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 05:02:58.211632   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf
	I0520 05:02:58.214361   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 05:02:58.214393   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 05:02:58.217579   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf
	I0520 05:02:58.220617   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 05:02:58.220641   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 05:02:58.223171   21535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf
	I0520 05:02:58.225853   21535 kubeadm.go:162] "https://control-plane.minikube.internal:54172" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:54172 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 05:02:58.225875   21535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
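
Each of the four grep checks above looks for the expected control-plane endpoint in a kubeconfig and, on a miss (here the files do not exist at all), removes the file so that kubeadm init can regenerate it. A compact sketch of that cleanup, with the caveat that minikube runs these steps over SSH while this sketch runs locally:

    // Sketch of the stale-config cleanup logged above: keep each kubeconfig
    // only if it references the expected control-plane endpoint, otherwise
    // delete it so `kubeadm init` can rewrite it. Paths and the endpoint
    // string are taken from the log.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:54172")
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                // Missing file or wrong endpoint: remove so it is rebuilt.
                os.Remove(f)
                fmt.Println("removed stale config:", f)
            }
        }
    }
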
	I0520 05:02:58.229013   21535 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 05:02:58.246051   21535 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 05:02:58.246131   21535 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 05:02:58.300143   21535 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 05:02:58.300194   21535 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 05:02:58.300242   21535 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 05:02:58.352992   21535 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 05:02:58.360998   21535 out.go:204]   - Generating certificates and keys ...
	I0520 05:02:58.361033   21535 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 05:02:58.361106   21535 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 05:02:58.361166   21535 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 05:02:58.361227   21535 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 05:02:58.361314   21535 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 05:02:58.361352   21535 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 05:02:58.361414   21535 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 05:02:58.361485   21535 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 05:02:58.361532   21535 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 05:02:58.361626   21535 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 05:02:58.361657   21535 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 05:02:58.361726   21535 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 05:02:58.416956   21535 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 05:02:58.600489   21535 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 05:02:58.659640   21535 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 05:02:58.696756   21535 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 05:02:58.726654   21535 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 05:02:58.727196   21535 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 05:02:58.727225   21535 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 05:02:58.815514   21535 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 05:02:58.819745   21535 out.go:204]   - Booting up control plane ...
	I0520 05:02:58.819792   21535 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 05:02:58.819832   21535 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 05:02:58.819862   21535 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 05:02:58.819912   21535 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 05:02:58.822495   21535 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 05:03:03.325590   21535 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.502410 seconds
	I0520 05:03:03.325698   21535 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 05:03:03.331174   21535 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 05:03:03.838947   21535 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 05:03:03.839057   21535 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-298000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 05:03:04.342791   21535 kubeadm.go:309] [bootstrap-token] Using token: vpjlvi.b3xqzdy0rkb3gdrn
	I0520 05:03:04.346288   21535 out.go:204]   - Configuring RBAC rules ...
	I0520 05:03:04.346355   21535 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 05:03:04.349297   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 05:03:04.352221   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 05:03:04.353084   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 05:03:04.353913   21535 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 05:03:04.354706   21535 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 05:03:04.358069   21535 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 05:03:04.535098   21535 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 05:03:04.751507   21535 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 05:03:04.751960   21535 kubeadm.go:309] 
	I0520 05:03:04.752069   21535 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 05:03:04.752077   21535 kubeadm.go:309] 
	I0520 05:03:04.752121   21535 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 05:03:04.752125   21535 kubeadm.go:309] 
	I0520 05:03:04.752142   21535 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 05:03:04.752243   21535 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 05:03:04.752284   21535 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 05:03:04.752306   21535 kubeadm.go:309] 
	I0520 05:03:04.752347   21535 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 05:03:04.752354   21535 kubeadm.go:309] 
	I0520 05:03:04.752374   21535 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 05:03:04.752378   21535 kubeadm.go:309] 
	I0520 05:03:04.752415   21535 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 05:03:04.752452   21535 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 05:03:04.752498   21535 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 05:03:04.752505   21535 kubeadm.go:309] 
	I0520 05:03:04.752580   21535 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 05:03:04.752633   21535 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 05:03:04.752639   21535 kubeadm.go:309] 
	I0520 05:03:04.752731   21535 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token vpjlvi.b3xqzdy0rkb3gdrn \
	I0520 05:03:04.752839   21535 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc \
	I0520 05:03:04.752855   21535 kubeadm.go:309] 	--control-plane 
	I0520 05:03:04.752863   21535 kubeadm.go:309] 
	I0520 05:03:04.752982   21535 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 05:03:04.752987   21535 kubeadm.go:309] 
	I0520 05:03:04.753023   21535 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token vpjlvi.b3xqzdy0rkb3gdrn \
	I0520 05:03:04.753082   21535 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ac1cdfdca409f4f9fdc4f52d6b2bfa1de0adce5fd40305cabc10e1e67749bdfc 
	I0520 05:03:04.753133   21535 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
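
The --discovery-token-ca-cert-hash echoed in the join commands above is derived from the cluster CA certificate: kubeadm pins the SHA-256 digest of the certificate's DER-encoded Subject Public Key Info. A sketch that recomputes the value, assuming the CA lives at ca.crt under the certificate directory named at 05:02:58.352992:

    // Recompute kubeadm's CA cert hash ("sha256:<hex>") from the CA file.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
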
	I0520 05:03:04.753138   21535 cni.go:84] Creating CNI manager for ""
	I0520 05:03:04.753145   21535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:03:04.755865   21535 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 05:03:04.765869   21535 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 05:03:04.769463   21535 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
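
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced at 05:03:04.755865. The log does not show its contents; the sketch below writes an illustrative bridge conflist (bridge, host-local, and portmap are real CNI plugins, but the bridge name and subnet here are assumptions, and the byte count will differ):

    // Write an illustrative bridge CNI conflist of the kind minikube installs.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
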
	I0520 05:03:04.775023   21535 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 05:03:04.775087   21535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 05:03:04.775148   21535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-298000 minikube.k8s.io/updated_at=2024_05_20T05_03_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=76465265bbb001d7b835613165bf9ac6b2521d45 minikube.k8s.io/name=stopped-upgrade-298000 minikube.k8s.io/primary=true
	I0520 05:03:04.823728   21535 ops.go:34] apiserver oom_adj: -16
	I0520 05:03:04.823728   21535 kubeadm.go:1107] duration metric: took 48.702209ms to wait for elevateKubeSystemPrivileges
	W0520 05:03:04.823752   21535 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 05:03:04.823756   21535 kubeadm.go:393] duration metric: took 4m11.677306459s to StartCluster
	I0520 05:03:04.823767   21535 settings.go:142] acquiring lock: {Name:mkb0015ab6abb1526406adb43e2b3d4392387c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:03:04.823859   21535 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:03:04.824274   21535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/kubeconfig: {Name:mk3ada957134ebfd6ba10dc19bcfe4b23657e56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:03:04.824488   21535 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:03:04.826274   21535 out.go:177] * Verifying Kubernetes components...
	I0520 05:03:04.824531   21535 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 05:03:04.824585   21535 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:03:04.835695   21535 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-298000"
	I0520 05:03:04.835707   21535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 05:03:04.835730   21535 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-298000"
	I0520 05:03:04.835702   21535 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-298000"
	W0520 05:03:04.835740   21535 addons.go:243] addon storage-provisioner should already be in state true
	I0520 05:03:04.835756   21535 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-298000"
	I0520 05:03:04.835773   21535 host.go:66] Checking if "stopped-upgrade-298000" exists ...
	I0520 05:03:04.840814   21535 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 05:03:04.844892   21535 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:03:04.844902   21535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 05:03:04.844911   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 05:03:04.846045   21535 kapi.go:59] client config for stopped-upgrade-298000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key", CAFile:"/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10586c580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
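
The rest.Config dump above is the client minikube builds for this profile: host https://10.0.2.15:8443 plus mutual-TLS material from the profile directory. Assuming client-go is on the module path, an equivalent client can be constructed like this (error handling trimmed):

    // Build a client equivalent to the rest.Config dump above with client-go.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/stopped-upgrade-298000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/18929-19024/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
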
	I0520 05:03:04.846169   21535 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-298000"
	W0520 05:03:04.846176   21535 addons.go:243] addon default-storageclass should already be in state true
	I0520 05:03:04.846188   21535 host.go:66] Checking if "stopped-upgrade-298000" exists ...
	I0520 05:03:04.847187   21535 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 05:03:04.847193   21535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 05:03:04.847200   21535 sshutil.go:53] new ssh client: &{IP:localhost Port:54138 SSHKeyPath:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/stopped-upgrade-298000/id_rsa Username:docker}
	I0520 05:03:04.927874   21535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 05:03:04.932796   21535 api_server.go:52] waiting for apiserver process to appear ...
	I0520 05:03:04.932843   21535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 05:03:04.936601   21535 api_server.go:72] duration metric: took 112.100791ms to wait for apiserver process to appear ...
	I0520 05:03:04.936609   21535 api_server.go:88] waiting for apiserver healthz status ...
	I0520 05:03:04.936616   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
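
Before probing healthz, minikube first waits for a kube-apiserver process to exist at all, via the pgrep -xnf call above (here it appeared within 112ms). A sketch of that wait, with an assumed 2-second poll interval and one-minute deadline:

    // Poll pgrep until a kube-apiserver process appears or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            // -x exact match, -n newest, -f match full command line.
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for apiserver process")
    }
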
	I0520 05:03:04.959286   21535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 05:03:04.965152   21535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 05:03:09.938674   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:09.938727   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:14.938923   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:14.938945   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:19.939228   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:19.939265   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:24.939709   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:24.939791   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:29.940358   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:29.940406   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:34.941099   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:34.941134   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 05:03:35.350875   21535 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 05:03:35.354021   21535 out.go:177] * Enabled addons: storage-provisioner
	I0520 05:03:35.365912   21535 addons.go:505] duration metric: took 30.54160525s for enable addons: enabled=[storage-provisioner]
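
Addon manifests are applied with the pinned in-VM kubectl against the VM-local kubeconfig, as in the two kubectl apply invocations at 05:03:04.959 and 05:03:04.965. Note the outcome split: default-storageclass fails because the host-side client cannot list StorageClasses at 10.0.2.15:8443, while the storage-provisioner apply issued inside the VM went through. A sketch of the apply step, run locally rather than over SSH (paths mirror the log):

    // Shell out to the pinned kubectl exactly as the log does, via bash -c.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
                "/var/lib/minikube/binaries/v1.24.1/kubectl apply -f "+
                "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
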
	I0520 05:03:39.942085   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:39.942144   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:44.943488   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:44.943532   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:49.944986   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:49.945010   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:54.946842   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:54.946864   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:03:59.949012   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:03:59.949035   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:04.951233   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:04.951395   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:04.964530   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:04.964608   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:04.975957   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:04.976036   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:04.986380   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:04.986449   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:04.996524   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:04.996585   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:05.006587   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:05.006662   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:05.017386   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:05.017460   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:05.031317   21535 logs.go:276] 0 containers: []
	W0520 05:04:05.031329   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:05.031382   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:05.041524   21535 logs.go:276] 1 containers: [efd128cf7652]
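
	After a failed probe, each diagnostic pass first maps control-plane components to container IDs with a name filter against the docker runtime, exactly as the Run lines above show. The same enumeration by hand (the k8s_<component> name prefixes are the ones kubeadm assigns its containers; the loop itself is illustrative):

	    # list the container ID(s) for each component minikube inspects above
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        echo "${c}: ${ids:-<none>}"
	    done
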
	I0520 05:04:05.041541   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:05.041546   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:05.045978   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:05.045985   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:05.081195   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:05.081208   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:05.092864   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:05.092873   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:05.104139   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:05.104151   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:05.137083   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:05.137090   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:05.150744   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:05.150753   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:05.164443   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:05.164454   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:05.175740   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:05.175752   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:05.190292   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:05.190305   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:05.202624   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:05.202637   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:05.220577   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:05.220587   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:05.245505   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:05.245515   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
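
	The "Gathering logs" block that follows each failed probe collects the same sources `minikube logs` does: journald units, per-container docker logs, a node view through the local kubeconfig, and a crictl listing with a docker fallback. Run inside the guest, the commands are the ones quoted verbatim in the Run lines above (the 400-line limits match the log; the container ID 74387074f7e1 is just the kube-apiserver example from this pass):

	    sudo journalctl -u kubelet -n 400                     # kubelet unit log
	    sudo journalctl -u docker -u cri-docker -n 400        # container runtime logs
	    docker logs --tail 400 74387074f7e1                   # one component, e.g. kube-apiserver
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
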
	I0520 05:04:07.759697   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:12.761932   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:12.762417   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:12.799373   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:12.799497   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:12.822035   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:12.822146   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:12.837482   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:12.837558   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:12.852526   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:12.852592   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:12.862792   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:12.862857   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:12.873545   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:12.873612   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:12.883855   21535 logs.go:276] 0 containers: []
	W0520 05:04:12.883866   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:12.883921   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:12.894325   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:04:12.894338   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:12.894344   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:12.898574   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:12.898583   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:12.912143   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:12.912155   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:12.923899   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:12.923913   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:12.935075   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:12.935089   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:12.946975   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:12.946988   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:12.958280   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:12.958294   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:12.981333   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:12.981342   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:04:12.993437   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:12.993448   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:13.029453   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:13.029462   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:13.068500   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:13.068513   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:13.083258   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:13.083268   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:13.098463   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:13.098477   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:15.618067   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:20.620648   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:20.620855   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:20.649616   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:20.649713   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:20.667621   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:20.667678   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:20.681528   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:20.681593   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:20.694180   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:20.694238   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:20.705494   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:20.705555   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:20.716944   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:20.717011   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:20.727450   21535 logs.go:276] 0 containers: []
	W0520 05:04:20.727462   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:20.727519   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:20.737949   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:04:20.737965   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:20.737970   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:20.742311   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:20.742319   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:20.778570   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:20.778580   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:20.793074   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:20.793082   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:04:20.804938   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:20.804949   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:20.817078   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:20.817088   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:20.852758   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:20.852767   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:20.866611   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:20.866620   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:20.878495   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:20.878504   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:20.889684   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:20.889693   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:20.906685   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:20.906699   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:20.917945   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:20.917954   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:20.934851   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:20.934861   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:23.460776   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:28.461818   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:28.462215   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:28.501394   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:28.501520   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:28.522859   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:28.522951   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:28.541135   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:28.541214   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:28.553598   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:28.553664   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:28.564307   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:28.564374   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:28.575213   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:28.575285   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:28.585708   21535 logs.go:276] 0 containers: []
	W0520 05:04:28.585719   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:28.585778   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:28.596388   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:04:28.596410   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:28.596415   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:28.608283   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:28.608296   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:28.628695   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:28.628707   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:28.653505   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:28.653512   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:28.658031   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:28.658040   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:28.693103   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:28.693117   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:28.707862   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:28.707873   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:28.723922   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:28.723932   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:28.735861   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:28.735872   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:04:28.750713   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:28.750725   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:28.785062   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:28.785070   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:28.799687   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:28.799698   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:28.814649   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:28.814660   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:31.328407   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:36.330574   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:36.330735   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:36.345980   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:36.346071   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:36.359395   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:36.359482   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:36.369697   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:36.369788   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:36.383912   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:36.383977   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:36.394886   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:36.394951   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:36.406917   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:36.406978   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:36.416821   21535 logs.go:276] 0 containers: []
	W0520 05:04:36.416831   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:36.416891   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:36.427005   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:04:36.427021   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:36.427027   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:36.440401   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:36.440413   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:36.461068   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:36.461079   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:36.472368   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:36.472382   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:36.483696   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:36.483708   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:36.495160   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:36.495172   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:04:36.506711   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:36.506724   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:36.542249   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:36.542262   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:36.576659   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:36.576673   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:36.591652   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:36.591660   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:36.618129   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:36.618141   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:36.642450   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:36.642459   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:36.646710   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:36.646718   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:39.162661   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:44.165507   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:44.165890   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:44.204854   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:44.204974   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:44.228036   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:44.228156   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:44.245167   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:44.245248   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:44.257233   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:44.257297   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:44.268475   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:44.268549   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:44.278873   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:44.278944   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:44.289080   21535 logs.go:276] 0 containers: []
	W0520 05:04:44.289089   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:44.289145   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:44.299910   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:04:44.299923   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:44.299927   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:44.334168   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:44.334175   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:44.338185   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:44.338194   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:44.353800   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:44.353810   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:44.365530   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:44.365542   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:44.380537   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:44.380546   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:44.391852   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:44.391863   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:44.415698   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:44.415708   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:04:44.426892   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:44.426904   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:44.460733   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:44.460745   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:44.483318   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:44.483329   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:44.494976   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:44.494989   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:44.506530   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:44.506538   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:47.024854   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:52.027021   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:52.027450   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:52.062296   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:52.062428   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:52.082218   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:52.082318   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:52.097194   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:52.097267   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:52.109668   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:52.109729   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:04:52.120473   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:04:52.120531   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:04:52.131157   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:04:52.131222   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:04:52.143396   21535 logs.go:276] 0 containers: []
	W0520 05:04:52.143410   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:04:52.143461   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:04:52.153656   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:04:52.153668   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:04:52.153673   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:04:52.165506   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:04:52.165515   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:04:52.180184   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:04:52.180198   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:04:52.192605   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:04:52.192617   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:04:52.210760   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:04:52.210771   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:04:52.222893   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:04:52.222902   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:04:52.245742   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:04:52.245752   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:04:52.249908   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:04:52.249915   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:04:52.264069   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:04:52.264082   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:04:52.275804   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:04:52.275817   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:04:52.289856   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:04:52.289868   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:04:52.305460   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:04:52.305471   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:04:52.341596   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:04:52.341606   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:04:54.902615   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:04:59.905173   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:04:59.905611   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:04:59.942740   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:04:59.942875   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:04:59.963583   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:04:59.963687   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:04:59.978811   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:04:59.978879   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:04:59.992887   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:04:59.992957   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:00.003512   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:00.003588   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:00.018005   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:00.018073   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:00.028214   21535 logs.go:276] 0 containers: []
	W0520 05:05:00.028230   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:00.028290   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:00.038529   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:00.038544   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:00.038549   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:00.054626   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:00.054636   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:00.059134   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:00.059143   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:00.073753   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:00.075880   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:00.089647   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:00.089660   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:00.101345   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:00.101357   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:00.116032   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:00.116044   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:00.133741   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:00.133752   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:00.144899   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:00.144910   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:00.180972   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:00.180981   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:00.214772   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:00.214783   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:00.226391   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:00.226403   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:00.238630   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:00.238638   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:02.763628   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:07.765859   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:07.766052   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:07.782510   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:07.782575   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:07.793032   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:07.793100   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:07.803439   21535 logs.go:276] 2 containers: [a5f23b37d814 ac2db1c054b3]
	I0520 05:05:07.803505   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:07.814579   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:07.814641   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:07.824741   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:07.824804   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:07.834838   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:07.834901   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:07.849112   21535 logs.go:276] 0 containers: []
	W0520 05:05:07.849125   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:07.849175   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:07.859738   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:07.859753   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:07.859758   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:07.873314   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:07.873322   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:07.887361   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:07.887370   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:07.898868   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:07.898877   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:07.910008   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:07.910019   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:07.927110   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:07.927120   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:07.962202   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:07.962209   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:07.996048   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:07.996058   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:08.008318   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:08.008331   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:08.022589   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:08.022598   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:08.044908   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:08.044919   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:08.069269   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:08.069276   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:08.081096   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:08.081106   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:10.587433   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:15.590098   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:15.590215   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:15.613722   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:15.613790   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:15.624470   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:15.624539   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:15.634733   21535 logs.go:276] 3 containers: [b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:05:15.634799   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:15.653604   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:15.653681   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:15.664236   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:15.664298   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:15.674912   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:15.674985   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:15.684957   21535 logs.go:276] 0 containers: []
	W0520 05:05:15.684972   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:15.685021   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:15.695334   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:15.695354   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:15.695359   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:15.707142   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:15.707152   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:15.732052   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:15.732059   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:15.743488   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:15.743501   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:15.780410   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:15.780424   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:15.807417   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:15.807430   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:15.827607   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:15.827619   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:15.841426   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:05:15.841438   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:05:15.852691   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:15.852705   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:15.886888   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:15.886897   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:15.898988   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:15.898999   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:15.916758   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:15.916767   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:15.931703   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:15.931718   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:15.946304   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:15.946314   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:18.463544   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:23.464484   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:23.464547   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:23.475807   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:23.475846   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:23.486293   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:23.486345   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:23.497696   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:05:23.497763   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:23.508465   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:23.508523   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:23.518727   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:23.518769   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:23.529380   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:23.529441   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:23.540467   21535 logs.go:276] 0 containers: []
	W0520 05:05:23.540480   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:23.540540   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:23.553521   21535 logs.go:276] 1 containers: [efd128cf7652]
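
	Note that the coredns list has grown from two containers at 05:04 to four here ([9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]), consistent with the pods being recreated while the apiserver stays unreachable; that reading is an inference from the IDs, not something the log states. A quick way to confirm it from inside the guest (illustrative):

	    # show status/exit codes for every coredns container
	    docker ps -a --filter name=k8s_coredns \
	        --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}'
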
	I0520 05:05:23.553542   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:05:23.553548   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:05:23.566995   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:05:23.567008   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:05:23.579951   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:23.579964   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:23.593272   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:23.593285   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:23.612508   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:23.612521   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:23.638103   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:23.638120   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:23.681273   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:23.681285   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:23.696285   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:23.696296   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:23.711208   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:23.711219   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:23.726048   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:23.726057   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:23.737747   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:23.737757   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:23.772722   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:23.772731   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:23.784073   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:23.784086   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:23.799022   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:23.799033   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:23.810846   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:23.810857   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:26.316700   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:31.319180   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:31.319343   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:31.353225   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:31.353335   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:31.369119   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:31.369198   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:31.382388   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:05:31.382471   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:31.393141   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:31.393213   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:31.403265   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:31.403329   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:31.413452   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:31.413539   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:31.424527   21535 logs.go:276] 0 containers: []
	W0520 05:05:31.424537   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:31.424588   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:31.441209   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:31.441227   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:31.441232   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:31.453247   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:05:31.453256   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:05:31.468629   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:31.468640   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:31.483163   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:31.483174   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:31.495350   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:31.495362   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:31.506990   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:31.507000   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:31.540903   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:31.540910   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:31.544985   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:31.544990   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:31.556798   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:31.556812   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:31.568552   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:31.568563   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:31.592841   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:31.592847   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:31.625734   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:05:31.625743   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:05:31.637095   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:31.637110   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:31.654600   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:31.654609   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:31.669328   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:31.669342   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:34.182908   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:39.185199   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:39.186157   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:39.226521   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:39.226658   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:39.251225   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:39.251321   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:39.272893   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:05:39.272973   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:39.289228   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:39.289295   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:39.299756   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:39.299829   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:39.310808   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:39.310869   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:39.321689   21535 logs.go:276] 0 containers: []
	W0520 05:05:39.321700   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:39.321763   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:39.332259   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:39.332280   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:05:39.332285   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:05:39.345975   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:39.345987   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:39.362136   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:39.362149   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:39.383887   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:39.383898   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:39.395564   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:39.395573   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:39.409921   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:39.409933   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:39.422566   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:39.422575   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:39.440777   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:39.440786   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:39.445112   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:39.445121   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:39.478564   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:39.478576   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:39.491249   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:39.491262   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:39.503233   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:39.503246   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:39.526542   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:39.526552   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:39.559965   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:05:39.559974   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:05:39.571299   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:39.571312   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:42.088069   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:47.090429   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:47.090503   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:47.105781   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:47.105837   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:47.119535   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:47.119593   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:47.132003   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:05:47.132065   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:47.143037   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:47.143094   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:47.155351   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:47.155405   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:47.166688   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:47.166757   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:47.178436   21535 logs.go:276] 0 containers: []
	W0520 05:05:47.178447   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:47.178493   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:47.191224   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:47.191245   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:47.191252   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:47.204626   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:47.204637   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:47.220923   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:47.220932   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:47.238887   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:05:47.238900   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:05:47.251349   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:47.251362   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:47.294681   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:47.294694   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:47.314054   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:47.314063   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:47.328670   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:47.328682   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:47.353353   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:47.353364   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:47.365931   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:47.365944   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:47.370413   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:05:47.370426   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:05:47.384613   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:47.384623   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:47.397555   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:47.397569   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:47.409732   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:47.409745   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:47.444702   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:47.444714   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:49.962236   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:05:54.964849   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:05:54.965240   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:05:55.014681   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:05:55.014809   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:05:55.033327   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:05:55.033417   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:05:55.047609   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:05:55.047679   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:05:55.059557   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:05:55.059623   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:05:55.073331   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:05:55.073398   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:05:55.086253   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:05:55.086321   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:05:55.099991   21535 logs.go:276] 0 containers: []
	W0520 05:05:55.100005   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:05:55.100076   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:05:55.117832   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:05:55.117851   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:05:55.117857   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:05:55.123071   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:05:55.123084   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:05:55.138210   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:05:55.138221   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:05:55.151653   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:05:55.151664   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:05:55.187022   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:05:55.187048   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:05:55.207539   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:05:55.207553   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:05:55.221320   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:05:55.221333   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:05:55.237458   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:05:55.237467   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:05:55.254942   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:05:55.254952   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:05:55.280370   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:05:55.280378   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:05:55.292804   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:05:55.292818   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:05:55.329110   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:05:55.329121   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:05:55.344678   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:05:55.344688   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:05:55.361045   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:05:55.361055   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:05:55.375037   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:05:55.375049   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:05:57.889112   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:02.891320   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:02.891631   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:02.927355   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:02.927448   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:02.952365   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:02.952425   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:02.967575   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:02.967647   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:02.977956   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:02.978015   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:02.988485   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:02.988553   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:02.998792   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:02.998849   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:03.009519   21535 logs.go:276] 0 containers: []
	W0520 05:06:03.009533   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:03.009589   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:03.021372   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:03.021386   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:03.021391   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:03.035658   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:03.035667   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:03.048449   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:03.048465   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:03.061387   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:03.061398   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:03.096902   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:03.096912   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:03.108493   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:03.108504   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:03.123006   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:03.123016   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:03.134401   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:03.134413   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:03.138605   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:03.138611   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:03.156770   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:03.156781   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:03.168485   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:03.168494   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:03.186277   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:03.186286   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:03.198542   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:03.198552   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:03.232381   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:03.232389   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:03.244316   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:03.244326   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:05.769890   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:10.771702   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:10.771778   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:10.783867   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:10.783924   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:10.795269   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:10.795323   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:10.806458   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:10.806535   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:10.817815   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:10.817877   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:10.829608   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:10.829662   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:10.840429   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:10.840493   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:10.851978   21535 logs.go:276] 0 containers: []
	W0520 05:06:10.851989   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:10.852044   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:10.863922   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:10.863938   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:10.863944   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:10.884100   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:10.884111   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:10.896712   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:10.896724   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:10.901871   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:10.901880   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:10.917744   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:10.917756   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:10.930946   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:10.930958   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:10.943442   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:10.943455   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:10.956299   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:10.956313   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:10.982158   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:10.982167   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:11.017070   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:11.017081   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:11.031692   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:11.031705   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:11.050138   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:11.050151   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:11.062083   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:11.062091   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:11.100168   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:11.100179   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:11.111916   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:11.111927   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:13.631975   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:18.634759   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:18.635025   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:18.661450   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:18.661565   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:18.678555   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:18.678631   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:18.693451   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:18.693522   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:18.704535   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:18.704595   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:18.715431   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:18.715498   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:18.726015   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:18.726084   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:18.736588   21535 logs.go:276] 0 containers: []
	W0520 05:06:18.736602   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:18.736656   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:18.747486   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:18.747505   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:18.747510   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:18.781132   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:18.781138   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:18.792918   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:18.792928   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:18.809688   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:18.809700   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:18.825810   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:18.825821   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:18.838667   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:18.838681   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:18.857686   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:18.857695   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:18.881594   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:18.881599   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:18.896028   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:18.896040   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:18.911191   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:18.911201   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:18.928262   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:18.928273   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:18.940076   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:18.940088   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:18.944363   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:18.944372   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:18.979116   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:18.979124   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:18.993235   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:18.993248   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:21.507031   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:26.509854   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:26.510178   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:26.547563   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:26.547678   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:26.567794   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:26.567881   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:26.581510   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:26.581580   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:26.593260   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:26.593332   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:26.603587   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:26.603649   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:26.615786   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:26.615860   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:26.626545   21535 logs.go:276] 0 containers: []
	W0520 05:06:26.626555   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:26.626610   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:26.637175   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:26.637194   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:26.637200   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:26.652016   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:26.652028   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:26.666048   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:26.666059   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:26.680506   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:26.680518   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:26.692317   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:26.692328   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:26.705088   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:26.705097   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:26.709624   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:26.709632   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:26.744197   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:26.744210   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:26.779070   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:26.779079   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:26.791897   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:26.791910   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:26.804048   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:26.804062   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:26.815315   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:26.815325   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:26.826992   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:26.827004   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:26.850913   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:26.850923   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:26.862894   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:26.862907   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:29.382785   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:34.385435   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:34.385531   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:34.397455   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:34.397547   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:34.409232   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:34.409315   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:34.421914   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:34.421971   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:34.433485   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:34.433576   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:34.445336   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:34.445412   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:34.457169   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:34.457251   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:34.467931   21535 logs.go:276] 0 containers: []
	W0520 05:06:34.467942   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:34.468016   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:34.478810   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:34.478829   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:34.478834   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:34.492365   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:34.492377   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:34.512969   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:34.512981   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:34.528259   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:34.528271   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:34.544842   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:34.544857   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:34.557753   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:34.557764   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:34.570795   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:34.570807   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:34.582940   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:34.582952   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:34.596012   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:34.596023   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:34.610233   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:34.610246   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:34.635135   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:34.635147   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:34.670817   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:34.670834   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:34.675534   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:34.675546   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:34.713556   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:34.713569   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:34.727696   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:34.727707   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:37.251296   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:42.253937   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:42.254358   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:42.295253   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:42.295383   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:42.316854   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:42.316959   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:42.332175   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:42.332256   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:42.349538   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:42.349607   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:42.360308   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:42.360377   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:42.371813   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:42.371889   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:42.387127   21535 logs.go:276] 0 containers: []
	W0520 05:06:42.387141   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:42.387198   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:42.397572   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:42.397587   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:42.397593   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:42.438435   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:42.438448   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:42.450625   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:42.450636   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:42.484541   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:42.484548   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:42.502667   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:42.502678   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:42.514728   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:42.514742   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:42.526621   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:42.526635   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:42.532999   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:42.533009   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:42.552490   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:42.552517   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:42.571940   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:42.571948   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:42.590101   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:42.590111   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:42.602436   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:42.602447   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:42.616643   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:42.616658   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:42.631538   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:42.631546   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:42.642822   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:42.642836   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:45.169101   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:50.171810   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:50.172216   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:50.204566   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:50.204685   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:50.223252   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:50.223345   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:50.237415   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:50.237496   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:50.252924   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:50.252990   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:50.263629   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:50.263685   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:50.274690   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:50.274775   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:50.284459   21535 logs.go:276] 0 containers: []
	W0520 05:06:50.284471   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:50.284537   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:50.294673   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:50.294688   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:50.294692   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:50.316027   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:50.316038   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:50.327793   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:50.327804   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:50.361456   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:50.361466   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:50.365946   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:50.365952   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:50.377128   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:50.377138   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:50.413739   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:50.413753   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:50.428570   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:50.428583   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:50.442832   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:50.442842   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:50.466933   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:50.466942   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:06:50.485841   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:50.485852   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:50.497615   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:50.497626   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:50.508712   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:50.508722   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:50.522489   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:50.522498   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:50.534262   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:50.534271   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:53.047699   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:06:58.050247   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:06:58.050338   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 05:06:58.061669   21535 logs.go:276] 1 containers: [74387074f7e1]
	I0520 05:06:58.061730   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 05:06:58.077404   21535 logs.go:276] 1 containers: [b05adffa6700]
	I0520 05:06:58.077467   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 05:06:58.089112   21535 logs.go:276] 4 containers: [9686ad684fec b427683a6287 a5f23b37d814 ac2db1c054b3]
	I0520 05:06:58.089167   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 05:06:58.100072   21535 logs.go:276] 1 containers: [731a1be3ea4f]
	I0520 05:06:58.100135   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 05:06:58.111404   21535 logs.go:276] 1 containers: [c2b9dec2c10a]
	I0520 05:06:58.111452   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 05:06:58.123938   21535 logs.go:276] 1 containers: [93bbda4340ad]
	I0520 05:06:58.123998   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 05:06:58.134387   21535 logs.go:276] 0 containers: []
	W0520 05:06:58.134397   21535 logs.go:278] No container was found matching "kindnet"
	I0520 05:06:58.134450   21535 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 05:06:58.145533   21535 logs.go:276] 1 containers: [efd128cf7652]
	I0520 05:06:58.145550   21535 logs.go:123] Gathering logs for etcd [b05adffa6700] ...
	I0520 05:06:58.145556   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b05adffa6700"
	I0520 05:06:58.160727   21535 logs.go:123] Gathering logs for coredns [b427683a6287] ...
	I0520 05:06:58.160741   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b427683a6287"
	I0520 05:06:58.173628   21535 logs.go:123] Gathering logs for kube-proxy [c2b9dec2c10a] ...
	I0520 05:06:58.173637   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2b9dec2c10a"
	I0520 05:06:58.189455   21535 logs.go:123] Gathering logs for kube-controller-manager [93bbda4340ad] ...
	I0520 05:06:58.189467   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93bbda4340ad"
	I0520 05:06:58.207927   21535 logs.go:123] Gathering logs for Docker ...
	I0520 05:06:58.207940   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 05:06:58.233619   21535 logs.go:123] Gathering logs for kubelet ...
	I0520 05:06:58.233632   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 05:06:58.270704   21535 logs.go:123] Gathering logs for describe nodes ...
	I0520 05:06:58.270719   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 05:06:58.311995   21535 logs.go:123] Gathering logs for kube-apiserver [74387074f7e1] ...
	I0520 05:06:58.312007   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74387074f7e1"
	I0520 05:06:58.329497   21535 logs.go:123] Gathering logs for coredns [ac2db1c054b3] ...
	I0520 05:06:58.329510   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2db1c054b3"
	I0520 05:06:58.343292   21535 logs.go:123] Gathering logs for storage-provisioner [efd128cf7652] ...
	I0520 05:06:58.343304   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efd128cf7652"
	I0520 05:06:58.355985   21535 logs.go:123] Gathering logs for container status ...
	I0520 05:06:58.356000   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 05:06:58.369005   21535 logs.go:123] Gathering logs for dmesg ...
	I0520 05:06:58.369016   21535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 05:06:58.374051   21535 logs.go:123] Gathering logs for coredns [9686ad684fec] ...
	I0520 05:06:58.374058   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9686ad684fec"
	I0520 05:06:58.386557   21535 logs.go:123] Gathering logs for coredns [a5f23b37d814] ...
	I0520 05:06:58.386569   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5f23b37d814"
	I0520 05:06:58.399103   21535 logs.go:123] Gathering logs for kube-scheduler [731a1be3ea4f] ...
	I0520 05:06:58.399115   21535 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 731a1be3ea4f"
	I0520 05:07:00.916506   21535 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 05:07:05.919123   21535 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 05:07:05.922548   21535 out.go:177] 
	W0520 05:07:05.926336   21535 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 05:07:05.926344   21535 out.go:239] * 
	W0520 05:07:05.926789   21535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:05.941333   21535 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-298000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.43s)

TestPause/serial/Start (9.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-879000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-879000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.919107083s)

-- stdout --
	* [pause-879000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-879000" primary control-plane node in "pause-879000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-879000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-879000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-879000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-879000 -n pause-879000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-879000 -n pause-879000: exit status 7 (66.483375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-879000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.99s)
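
Note: this start, and every NoKubernetes and NetworkPlugins start below, fails the same way: nothing is listening on /var/run/socket_vmnet, so the socket_vmnet_client wrapper is refused before QEMU is even launched. A quick host-side check (socket path taken from the log; the nc(1) probe is a diagnostic sketch, not part of the test suite):

	$ ls -l /var/run/socket_vmnet              # does the socket exist at all?
	$ nc -U /var/run/socket_vmnet < /dev/null  # "Connection refused" reproduces the error

If the daemon is simply down, restarting it (for a Homebrew install, something like `sudo brew services restart socket_vmnet`) should clear this whole class of failures.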

TestNoKubernetes/serial/StartWithK8s (9.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-384000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-384000 --driver=qemu2 : exit status 80 (9.786703334s)

-- stdout --
	* [NoKubernetes-384000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-384000" primary control-plane node in "NoKubernetes-384000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-384000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000: exit status 7 (64.770791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --driver=qemu2 : exit status 80 (5.236194833s)

-- stdout --
	* [NoKubernetes-384000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-384000
	* Restarting existing qemu2 VM for "NoKubernetes-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000: exit status 7 (44.501084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.28s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --driver=qemu2 : exit status 80 (5.23569975s)

-- stdout --
	* [NoKubernetes-384000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-384000
	* Restarting existing qemu2 VM for "NoKubernetes-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000: exit status 7 (64.947458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-384000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-384000 --driver=qemu2 : exit status 80 (5.2866965s)

-- stdout --
	* [NoKubernetes-384000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-384000
	* Restarting existing qemu2 VM for "NoKubernetes-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-384000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-384000 -n NoKubernetes-384000: exit status 7 (52.234542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

TestNetworkPlugins/group/auto/Start (9.71s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.704551083s)

-- stdout --
	* [auto-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-458000" primary control-plane node in "auto-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:05:11.576260   21769 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:05:11.576386   21769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:11.576390   21769 out.go:304] Setting ErrFile to fd 2...
	I0520 05:05:11.576392   21769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:11.576530   21769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:05:11.577570   21769 out.go:298] Setting JSON to false
	I0520 05:05:11.594103   21769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11082,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:05:11.594219   21769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:05:11.599103   21769 out.go:177] * [auto-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:05:11.606084   21769 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:05:11.606155   21769 notify.go:220] Checking for updates...
	I0520 05:05:11.610116   21769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:05:11.613127   21769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:05:11.616029   21769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:05:11.619077   21769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:05:11.622134   21769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:05:11.623709   21769 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:05:11.623776   21769 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:05:11.623825   21769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:05:11.628046   21769 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:05:11.634858   21769 start.go:297] selected driver: qemu2
	I0520 05:05:11.634864   21769 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:05:11.634870   21769 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:05:11.637120   21769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:05:11.640106   21769 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:05:11.643192   21769 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:05:11.643210   21769 cni.go:84] Creating CNI manager for ""
	I0520 05:05:11.643219   21769 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:05:11.643223   21769 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:05:11.643256   21769 start.go:340] cluster config:
	{Name:auto-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:05:11.647512   21769 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:05:11.655053   21769 out.go:177] * Starting "auto-458000" primary control-plane node in "auto-458000" cluster
	I0520 05:05:11.659115   21769 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:05:11.659130   21769 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:05:11.659140   21769 cache.go:56] Caching tarball of preloaded images
	I0520 05:05:11.659197   21769 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:05:11.659207   21769 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:05:11.659271   21769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/auto-458000/config.json ...
	I0520 05:05:11.659282   21769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/auto-458000/config.json: {Name:mk1a93628b99fb2d36f1b7c55c9f668f4be3f381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:05:11.659581   21769 start.go:360] acquireMachinesLock for auto-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:11.659610   21769 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "auto-458000"
	I0520 05:05:11.659620   21769 start.go:93] Provisioning new machine with config: &{Name:auto-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:11.659651   21769 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:11.664094   21769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:11.679041   21769 start.go:159] libmachine.API.Create for "auto-458000" (driver="qemu2")
	I0520 05:05:11.679074   21769 client.go:168] LocalClient.Create starting
	I0520 05:05:11.679126   21769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:11.679158   21769 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:11.679169   21769 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:11.679209   21769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:11.679231   21769 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:11.679238   21769 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:11.679652   21769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:11.816288   21769 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:11.857609   21769 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:11.857615   21769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:11.857796   21769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2
	I0520 05:05:11.870505   21769 main.go:141] libmachine: STDOUT: 
	I0520 05:05:11.870532   21769 main.go:141] libmachine: STDERR: 
	I0520 05:05:11.870589   21769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2 +20000M
	I0520 05:05:11.881580   21769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:11.881598   21769 main.go:141] libmachine: STDERR: 
	I0520 05:05:11.881617   21769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2
	I0520 05:05:11.881624   21769 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:11.881653   21769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:7f:4b:d0:26:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2
	I0520 05:05:11.883370   21769 main.go:141] libmachine: STDOUT: 
	I0520 05:05:11.883398   21769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:11.883418   21769 client.go:171] duration metric: took 204.341167ms to LocalClient.Create
	I0520 05:05:13.885625   21769 start.go:128] duration metric: took 2.225958209s to createHost
	I0520 05:05:13.885715   21769 start.go:83] releasing machines lock for "auto-458000", held for 2.226111791s
	W0520 05:05:13.885766   21769 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:13.901259   21769 out.go:177] * Deleting "auto-458000" in qemu2 ...
	W0520 05:05:13.926159   21769 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:13.926222   21769 start.go:728] Will try again in 5 seconds ...
	I0520 05:05:18.928452   21769 start.go:360] acquireMachinesLock for auto-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:18.929008   21769 start.go:364] duration metric: took 454.167µs to acquireMachinesLock for "auto-458000"
	I0520 05:05:18.929088   21769 start.go:93] Provisioning new machine with config: &{Name:auto-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:18.929396   21769 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:18.938013   21769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:18.976504   21769 start.go:159] libmachine.API.Create for "auto-458000" (driver="qemu2")
	I0520 05:05:18.976561   21769 client.go:168] LocalClient.Create starting
	I0520 05:05:18.976673   21769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:18.976754   21769 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:18.976772   21769 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:18.976833   21769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:18.976873   21769 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:18.976889   21769 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:18.977366   21769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:19.123245   21769 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:19.186158   21769 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:19.186168   21769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:19.186350   21769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2
	I0520 05:05:19.199211   21769 main.go:141] libmachine: STDOUT: 
	I0520 05:05:19.199230   21769 main.go:141] libmachine: STDERR: 
	I0520 05:05:19.199284   21769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2 +20000M
	I0520 05:05:19.210262   21769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:19.210278   21769 main.go:141] libmachine: STDERR: 
	I0520 05:05:19.210289   21769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2
	I0520 05:05:19.210294   21769 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:19.210345   21769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:a4:2c:9c:ba:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/auto-458000/disk.qcow2
	I0520 05:05:19.212097   21769 main.go:141] libmachine: STDOUT: 
	I0520 05:05:19.212112   21769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:19.212124   21769 client.go:171] duration metric: took 235.556083ms to LocalClient.Create
	I0520 05:05:21.214333   21769 start.go:128] duration metric: took 2.284907292s to createHost
	I0520 05:05:21.214435   21769 start.go:83] releasing machines lock for "auto-458000", held for 2.285422667s
	W0520 05:05:21.214853   21769 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:21.224519   21769 out.go:177] 
	W0520 05:05:21.228562   21769 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:05:21.228589   21769 out.go:239] * 
	* 
	W0520 05:05:21.231243   21769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:05:21.240465   21769 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.71s)
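
Note: the -v=1 trace above shows how the qemu2 driver wires up networking: qemu-system-aarch64 is exec'd under socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as fd 3 (-netdev socket,id=net0,fd=3). The failing step can be exercised without minikube; a sketch using true as a stand-in for the QEMU command line:

	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	Failed to connect to "/var/run/socket_vmnet": Connection refused

Because the wrapper exits before its child ever runs, no VM is created, and the automatic retry five seconds later fails identically.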

TestNetworkPlugins/group/calico/Start (9.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.746827041s)

-- stdout --
	* [calico-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-458000" primary control-plane node in "calico-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:05:23.423807   21879 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:05:23.423944   21879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:23.423947   21879 out.go:304] Setting ErrFile to fd 2...
	I0520 05:05:23.423949   21879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:23.424078   21879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:05:23.425151   21879 out.go:298] Setting JSON to false
	I0520 05:05:23.441813   21879 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11094,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:05:23.441880   21879 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:05:23.448433   21879 out.go:177] * [calico-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:05:23.456500   21879 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:05:23.460386   21879 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:05:23.456557   21879 notify.go:220] Checking for updates...
	I0520 05:05:23.464412   21879 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:05:23.465677   21879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:05:23.468361   21879 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:05:23.471398   21879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:05:23.474805   21879 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:05:23.474879   21879 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:05:23.474932   21879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:05:23.478374   21879 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:05:23.485417   21879 start.go:297] selected driver: qemu2
	I0520 05:05:23.485430   21879 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:05:23.485439   21879 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:05:23.487834   21879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:05:23.490385   21879 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:05:23.493470   21879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:05:23.493487   21879 cni.go:84] Creating CNI manager for "calico"
	I0520 05:05:23.493491   21879 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0520 05:05:23.493540   21879 start.go:340] cluster config:
	{Name:calico-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:05:23.498496   21879 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:05:23.505341   21879 out.go:177] * Starting "calico-458000" primary control-plane node in "calico-458000" cluster
	I0520 05:05:23.509403   21879 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:05:23.509424   21879 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:05:23.509437   21879 cache.go:56] Caching tarball of preloaded images
	I0520 05:05:23.509518   21879 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:05:23.509524   21879 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:05:23.509591   21879 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/calico-458000/config.json ...
	I0520 05:05:23.509603   21879 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/calico-458000/config.json: {Name:mk3c89f48a701799cec069aa9783a0ab324e5f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:05:23.509901   21879 start.go:360] acquireMachinesLock for calico-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:23.509934   21879 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "calico-458000"
	I0520 05:05:23.509946   21879 start.go:93] Provisioning new machine with config: &{Name:calico-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:23.510003   21879 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:23.518395   21879 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:23.534635   21879 start.go:159] libmachine.API.Create for "calico-458000" (driver="qemu2")
	I0520 05:05:23.534676   21879 client.go:168] LocalClient.Create starting
	I0520 05:05:23.534750   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:23.534784   21879 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:23.534797   21879 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:23.534843   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:23.534866   21879 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:23.534875   21879 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:23.535306   21879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:23.676509   21879 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:23.740191   21879 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:23.740202   21879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:23.740425   21879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2
	I0520 05:05:23.753827   21879 main.go:141] libmachine: STDOUT: 
	I0520 05:05:23.753852   21879 main.go:141] libmachine: STDERR: 
	I0520 05:05:23.753915   21879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2 +20000M
	I0520 05:05:23.765791   21879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:23.765825   21879 main.go:141] libmachine: STDERR: 
	I0520 05:05:23.765844   21879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2
	I0520 05:05:23.765849   21879 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:23.765882   21879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:4a:57:0c:0e:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2
	I0520 05:05:23.767854   21879 main.go:141] libmachine: STDOUT: 
	I0520 05:05:23.767869   21879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:23.767888   21879 client.go:171] duration metric: took 233.209083ms to LocalClient.Create
	I0520 05:05:25.770010   21879 start.go:128] duration metric: took 2.259993792s to createHost
	I0520 05:05:25.770097   21879 start.go:83] releasing machines lock for "calico-458000", held for 2.260169375s
	W0520 05:05:25.770254   21879 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:25.781505   21879 out.go:177] * Deleting "calico-458000" in qemu2 ...
	W0520 05:05:25.808852   21879 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:25.808892   21879 start.go:728] Will try again in 5 seconds ...
	I0520 05:05:30.809911   21879 start.go:360] acquireMachinesLock for calico-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:30.810465   21879 start.go:364] duration metric: took 410.209µs to acquireMachinesLock for "calico-458000"
	I0520 05:05:30.810587   21879 start.go:93] Provisioning new machine with config: &{Name:calico-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:30.810854   21879 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:30.814179   21879 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:30.854532   21879 start.go:159] libmachine.API.Create for "calico-458000" (driver="qemu2")
	I0520 05:05:30.854583   21879 client.go:168] LocalClient.Create starting
	I0520 05:05:30.854680   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:30.854743   21879 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:30.854761   21879 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:30.854833   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:30.854872   21879 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:30.854890   21879 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:30.855471   21879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:31.001291   21879 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:31.077395   21879 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:31.077401   21879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:31.077585   21879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2
	I0520 05:05:31.090391   21879 main.go:141] libmachine: STDOUT: 
	I0520 05:05:31.090413   21879 main.go:141] libmachine: STDERR: 
	I0520 05:05:31.090465   21879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2 +20000M
	I0520 05:05:31.101453   21879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:31.101471   21879 main.go:141] libmachine: STDERR: 
	I0520 05:05:31.101484   21879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2
	I0520 05:05:31.101489   21879 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:31.101526   21879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:82:1b:4b:35:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/calico-458000/disk.qcow2
	I0520 05:05:31.103285   21879 main.go:141] libmachine: STDOUT: 
	I0520 05:05:31.103305   21879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:31.103318   21879 client.go:171] duration metric: took 248.731791ms to LocalClient.Create
	I0520 05:05:33.105463   21879 start.go:128] duration metric: took 2.2945845s to createHost
	I0520 05:05:33.105501   21879 start.go:83] releasing machines lock for "calico-458000", held for 2.295039042s
	W0520 05:05:33.105678   21879 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:33.116098   21879 out.go:177] 
	W0520 05:05:33.121157   21879 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:05:33.121177   21879 out.go:239] * 
	* 
	W0520 05:05:33.122100   21879 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:05:33.134058   21879 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.75s)
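
This failure, like the custom-flannel and false runs that follow, stops at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so every qemu-system-aarch64 launch exits with "Connection refused" before the VM boots. The following Go probe is an illustrative sketch only (it is not part of net_test.go or the minikube driver); it assumes the SocketVMnetPath value logged in the machine configs above and reproduces the same "connection refused" whenever no socket_vmnet daemon is listening on the host.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from SocketVMnetPath in the cluster configs above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening, this fails with the same "connection
		// refused" that the qemu2 driver surfaces via socket_vmnet_client.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on the Jenkins host before the suite, a probe like this would distinguish a stopped or missing socket_vmnet daemon from a permissions problem on the socket itself.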

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.83224225s)

                                                
                                                
-- stdout --
	* [custom-flannel-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-458000" primary control-plane node in "custom-flannel-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 05:05:35.468036   21997 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:05:35.468186   21997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:35.468189   21997 out.go:304] Setting ErrFile to fd 2...
	I0520 05:05:35.468191   21997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:35.468304   21997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:05:35.469356   21997 out.go:298] Setting JSON to false
	I0520 05:05:35.485879   21997 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11106,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:05:35.485947   21997 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:05:35.492735   21997 out.go:177] * [custom-flannel-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:05:35.500678   21997 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:05:35.504749   21997 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:05:35.500733   21997 notify.go:220] Checking for updates...
	I0520 05:05:35.508615   21997 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:05:35.515574   21997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:05:35.518650   21997 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:05:35.522700   21997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:05:35.526079   21997 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:05:35.526145   21997 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:05:35.526192   21997 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:05:35.530628   21997 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:05:35.537679   21997 start.go:297] selected driver: qemu2
	I0520 05:05:35.537687   21997 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:05:35.537694   21997 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:05:35.540071   21997 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:05:35.542616   21997 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:05:35.545808   21997 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:05:35.545829   21997 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0520 05:05:35.545846   21997 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0520 05:05:35.545886   21997 start.go:340] cluster config:
	{Name:custom-flannel-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:05:35.550385   21997 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:05:35.557671   21997 out.go:177] * Starting "custom-flannel-458000" primary control-plane node in "custom-flannel-458000" cluster
	I0520 05:05:35.561661   21997 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:05:35.561673   21997 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:05:35.561683   21997 cache.go:56] Caching tarball of preloaded images
	I0520 05:05:35.561733   21997 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:05:35.561738   21997 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:05:35.561800   21997 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/custom-flannel-458000/config.json ...
	I0520 05:05:35.561810   21997 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/custom-flannel-458000/config.json: {Name:mk5651924e141c13898d2d4a09b0a59aa7eea4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:05:35.562018   21997 start.go:360] acquireMachinesLock for custom-flannel-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:35.562051   21997 start.go:364] duration metric: took 26.791µs to acquireMachinesLock for "custom-flannel-458000"
	I0520 05:05:35.562065   21997 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:35.562091   21997 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:35.566690   21997 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:35.582697   21997 start.go:159] libmachine.API.Create for "custom-flannel-458000" (driver="qemu2")
	I0520 05:05:35.582720   21997 client.go:168] LocalClient.Create starting
	I0520 05:05:35.582776   21997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:35.582805   21997 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:35.582814   21997 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:35.582856   21997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:35.582879   21997 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:35.582886   21997 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:35.583244   21997 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:35.723863   21997 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:35.824699   21997 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:35.824705   21997 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:35.824897   21997 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2
	I0520 05:05:35.837833   21997 main.go:141] libmachine: STDOUT: 
	I0520 05:05:35.837852   21997 main.go:141] libmachine: STDERR: 
	I0520 05:05:35.837899   21997 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2 +20000M
	I0520 05:05:35.849371   21997 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:35.849385   21997 main.go:141] libmachine: STDERR: 
	I0520 05:05:35.849394   21997 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2
	I0520 05:05:35.849397   21997 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:35.849425   21997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a9:66:44:63:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2
	I0520 05:05:35.851340   21997 main.go:141] libmachine: STDOUT: 
	I0520 05:05:35.851356   21997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:35.851375   21997 client.go:171] duration metric: took 268.652625ms to LocalClient.Create
	I0520 05:05:37.853471   21997 start.go:128] duration metric: took 2.291380083s to createHost
	I0520 05:05:37.853506   21997 start.go:83] releasing machines lock for "custom-flannel-458000", held for 2.291465458s
	W0520 05:05:37.853548   21997 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:37.865744   21997 out.go:177] * Deleting "custom-flannel-458000" in qemu2 ...
	W0520 05:05:37.882390   21997 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:37.882405   21997 start.go:728] Will try again in 5 seconds ...
	I0520 05:05:42.884557   21997 start.go:360] acquireMachinesLock for custom-flannel-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:42.884705   21997 start.go:364] duration metric: took 111.917µs to acquireMachinesLock for "custom-flannel-458000"
	I0520 05:05:42.884738   21997 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:42.884808   21997 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:42.895622   21997 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:42.913144   21997 start.go:159] libmachine.API.Create for "custom-flannel-458000" (driver="qemu2")
	I0520 05:05:42.913170   21997 client.go:168] LocalClient.Create starting
	I0520 05:05:42.913257   21997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:42.913295   21997 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:42.913304   21997 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:42.913347   21997 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:42.913378   21997 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:42.913384   21997 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:42.913693   21997 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:43.047757   21997 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:43.211004   21997 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:43.211011   21997 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:43.211237   21997 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2
	I0520 05:05:43.224360   21997 main.go:141] libmachine: STDOUT: 
	I0520 05:05:43.224381   21997 main.go:141] libmachine: STDERR: 
	I0520 05:05:43.224442   21997 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2 +20000M
	I0520 05:05:43.235513   21997 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:43.235530   21997 main.go:141] libmachine: STDERR: 
	I0520 05:05:43.235543   21997 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2
	I0520 05:05:43.235548   21997 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:43.235589   21997 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:46:7c:12:f1:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/custom-flannel-458000/disk.qcow2
	I0520 05:05:43.237413   21997 main.go:141] libmachine: STDOUT: 
	I0520 05:05:43.237433   21997 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:43.237445   21997 client.go:171] duration metric: took 324.274792ms to LocalClient.Create
	I0520 05:05:45.237855   21997 start.go:128] duration metric: took 2.353045833s to createHost
	I0520 05:05:45.237876   21997 start.go:83] releasing machines lock for "custom-flannel-458000", held for 2.35318125s
	W0520 05:05:45.238010   21997 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:45.247410   21997 out.go:177] 
	W0520 05:05:45.253381   21997 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:05:45.253395   21997 out.go:239] * 
	* 
	W0520 05:05:45.254356   21997 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:05:45.264376   21997 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)

                                                
                                    
TestNetworkPlugins/group/false/Start (9.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.859033041s)

                                                
                                                
-- stdout --
	* [false-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-458000" primary control-plane node in "false-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 05:05:47.639937   22117 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:05:47.640075   22117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:47.640078   22117 out.go:304] Setting ErrFile to fd 2...
	I0520 05:05:47.640080   22117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:47.640205   22117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:05:47.641302   22117 out.go:298] Setting JSON to false
	I0520 05:05:47.657831   22117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11118,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:05:47.657934   22117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:05:47.663565   22117 out.go:177] * [false-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:05:47.668618   22117 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:05:47.668710   22117 notify.go:220] Checking for updates...
	I0520 05:05:47.672548   22117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:05:47.675527   22117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:05:47.678524   22117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:05:47.682546   22117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:05:47.685512   22117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:05:47.688927   22117 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:05:47.688992   22117 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:05:47.689038   22117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:05:47.693515   22117 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:05:47.700564   22117 start.go:297] selected driver: qemu2
	I0520 05:05:47.700572   22117 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:05:47.700580   22117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:05:47.702795   22117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:05:47.706563   22117 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:05:47.709561   22117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:05:47.709573   22117 cni.go:84] Creating CNI manager for "false"
	I0520 05:05:47.709603   22117 start.go:340] cluster config:
	{Name:false-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:05:47.713843   22117 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:05:47.720562   22117 out.go:177] * Starting "false-458000" primary control-plane node in "false-458000" cluster
	I0520 05:05:47.724525   22117 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:05:47.724542   22117 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:05:47.724552   22117 cache.go:56] Caching tarball of preloaded images
	I0520 05:05:47.724606   22117 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:05:47.724612   22117 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:05:47.724680   22117 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/false-458000/config.json ...
	I0520 05:05:47.724691   22117 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/false-458000/config.json: {Name:mk845d568d8e33dd76b05c4717ed8974b9bf21c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:05:47.724897   22117 start.go:360] acquireMachinesLock for false-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:47.724926   22117 start.go:364] duration metric: took 24.709µs to acquireMachinesLock for "false-458000"
	I0520 05:05:47.724938   22117 start.go:93] Provisioning new machine with config: &{Name:false-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:47.724972   22117 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:47.733534   22117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:47.748388   22117 start.go:159] libmachine.API.Create for "false-458000" (driver="qemu2")
	I0520 05:05:47.748417   22117 client.go:168] LocalClient.Create starting
	I0520 05:05:47.748476   22117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:47.748507   22117 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:47.748517   22117 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:47.748557   22117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:47.748579   22117 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:47.748589   22117 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:47.748928   22117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:47.888785   22117 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:47.991415   22117 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:47.991423   22117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:47.991627   22117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2
	I0520 05:05:48.004902   22117 main.go:141] libmachine: STDOUT: 
	I0520 05:05:48.004920   22117 main.go:141] libmachine: STDERR: 
	I0520 05:05:48.004981   22117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2 +20000M
	I0520 05:05:48.016025   22117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:48.016042   22117 main.go:141] libmachine: STDERR: 
	I0520 05:05:48.016058   22117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2
	I0520 05:05:48.016062   22117 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:48.016096   22117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:59:ac:d1:ce:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2
	I0520 05:05:48.017904   22117 main.go:141] libmachine: STDOUT: 
	I0520 05:05:48.017922   22117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:48.017944   22117 client.go:171] duration metric: took 269.524625ms to LocalClient.Create
	I0520 05:05:50.020036   22117 start.go:128] duration metric: took 2.295066958s to createHost
	I0520 05:05:50.020094   22117 start.go:83] releasing machines lock for "false-458000", held for 2.295178042s
	W0520 05:05:50.020126   22117 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:50.031310   22117 out.go:177] * Deleting "false-458000" in qemu2 ...
	W0520 05:05:50.050018   22117 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:50.050034   22117 start.go:728] Will try again in 5 seconds ...
	I0520 05:05:55.052142   22117 start.go:360] acquireMachinesLock for false-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:55.052249   22117 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "false-458000"
	I0520 05:05:55.052262   22117 start.go:93] Provisioning new machine with config: &{Name:false-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:55.052309   22117 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:55.061519   22117 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:55.077832   22117 start.go:159] libmachine.API.Create for "false-458000" (driver="qemu2")
	I0520 05:05:55.077872   22117 client.go:168] LocalClient.Create starting
	I0520 05:05:55.077954   22117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:55.077992   22117 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:55.078001   22117 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:55.078034   22117 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:55.078058   22117 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:55.078064   22117 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:55.078379   22117 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:55.229597   22117 main.go:141] libmachine: Creating SSH key...
	I0520 05:05:55.404409   22117 main.go:141] libmachine: Creating Disk image...
	I0520 05:05:55.404420   22117 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:05:55.404625   22117 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2
	I0520 05:05:55.417820   22117 main.go:141] libmachine: STDOUT: 
	I0520 05:05:55.417852   22117 main.go:141] libmachine: STDERR: 
	I0520 05:05:55.417904   22117 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2 +20000M
	I0520 05:05:55.429077   22117 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:05:55.429095   22117 main.go:141] libmachine: STDERR: 
	I0520 05:05:55.429109   22117 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2
	I0520 05:05:55.429115   22117 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:05:55.429157   22117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:57:71:7f:c2:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/false-458000/disk.qcow2
	I0520 05:05:55.430979   22117 main.go:141] libmachine: STDOUT: 
	I0520 05:05:55.430994   22117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:05:55.431008   22117 client.go:171] duration metric: took 353.134083ms to LocalClient.Create
	I0520 05:05:57.433202   22117 start.go:128] duration metric: took 2.38088125s to createHost
	I0520 05:05:57.433274   22117 start.go:83] releasing machines lock for "false-458000", held for 2.3810335s
	W0520 05:05:57.433627   22117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:05:57.443210   22117 out.go:177] 
	W0520 05:05:57.446314   22117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:05:57.446332   22117 out.go:239] * 
	* 
	W0520 05:05:57.447888   22117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:05:57.458045   22117 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
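
Every attempt above fails at the same step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never launched. On a unix socket that error means the path exists (or is stale) but no daemon is accepting on it. A minimal triage sketch for the build agent, assuming socket_vmnet is installed under /opt/socket_vmnet to match the client path in the log; the daemon invocation and gateway address follow the socket_vmnet README and are assumptions, not something this report confirms:

	# Is anything accepting on the socket the tests expect?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is down, restart it as root (flags assumed from the
	# socket_vmnet README; adjust the gateway to the local setup).
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	  --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

The kindnet, flannel, and enable-default-cni runs below hit the identical refusal, so a single dead daemon on the agent would explain this whole group of failures.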

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.893434791s)

                                                
                                                
-- stdout --
	* [kindnet-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-458000" primary control-plane node in "kindnet-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 05:05:59.652944   22227 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:05:59.653084   22227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:59.653088   22227 out.go:304] Setting ErrFile to fd 2...
	I0520 05:05:59.653091   22227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:05:59.653206   22227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:05:59.654293   22227 out.go:298] Setting JSON to false
	I0520 05:05:59.670554   22227 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11130,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:05:59.670625   22227 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:05:59.676029   22227 out.go:177] * [kindnet-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:05:59.684207   22227 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:05:59.688156   22227 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:05:59.684306   22227 notify.go:220] Checking for updates...
	I0520 05:05:59.694163   22227 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:05:59.697136   22227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:05:59.700137   22227 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:05:59.703214   22227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:05:59.705059   22227 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:05:59.705124   22227 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:05:59.705176   22227 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:05:59.709199   22227 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:05:59.715992   22227 start.go:297] selected driver: qemu2
	I0520 05:05:59.715998   22227 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:05:59.716004   22227 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:05:59.718225   22227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:05:59.722147   22227 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:05:59.725218   22227 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:05:59.725231   22227 cni.go:84] Creating CNI manager for "kindnet"
	I0520 05:05:59.725233   22227 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 05:05:59.725263   22227 start.go:340] cluster config:
	{Name:kindnet-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:05:59.729504   22227 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:05:59.736137   22227 out.go:177] * Starting "kindnet-458000" primary control-plane node in "kindnet-458000" cluster
	I0520 05:05:59.740191   22227 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:05:59.740207   22227 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:05:59.740220   22227 cache.go:56] Caching tarball of preloaded images
	I0520 05:05:59.740282   22227 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:05:59.740287   22227 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:05:59.740353   22227 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kindnet-458000/config.json ...
	I0520 05:05:59.740365   22227 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kindnet-458000/config.json: {Name:mkd1f89f75d9ba9f2e078960a24289353ea0128e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:05:59.740567   22227 start.go:360] acquireMachinesLock for kindnet-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:05:59.740598   22227 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "kindnet-458000"
	I0520 05:05:59.740609   22227 start.go:93] Provisioning new machine with config: &{Name:kindnet-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:05:59.740632   22227 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:05:59.749201   22227 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:05:59.764134   22227 start.go:159] libmachine.API.Create for "kindnet-458000" (driver="qemu2")
	I0520 05:05:59.764162   22227 client.go:168] LocalClient.Create starting
	I0520 05:05:59.764227   22227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:05:59.764256   22227 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:59.764265   22227 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:59.764300   22227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:05:59.764322   22227 main.go:141] libmachine: Decoding PEM data...
	I0520 05:05:59.764335   22227 main.go:141] libmachine: Parsing certificate...
	I0520 05:05:59.764752   22227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:05:59.902191   22227 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:00.078196   22227 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:00.078207   22227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:00.078382   22227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2
	I0520 05:06:00.091254   22227 main.go:141] libmachine: STDOUT: 
	I0520 05:06:00.091273   22227 main.go:141] libmachine: STDERR: 
	I0520 05:06:00.091334   22227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2 +20000M
	I0520 05:06:00.102372   22227 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:00.102393   22227 main.go:141] libmachine: STDERR: 
	I0520 05:06:00.102438   22227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2
	I0520 05:06:00.102443   22227 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:00.102476   22227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:c3:19:f7:07:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2
	I0520 05:06:00.104187   22227 main.go:141] libmachine: STDOUT: 
	I0520 05:06:00.104202   22227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:00.104220   22227 client.go:171] duration metric: took 340.054042ms to LocalClient.Create
	I0520 05:06:02.106482   22227 start.go:128] duration metric: took 2.365832042s to createHost
	I0520 05:06:02.106569   22227 start.go:83] releasing machines lock for "kindnet-458000", held for 2.365979625s
	W0520 05:06:02.106621   22227 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:02.116185   22227 out.go:177] * Deleting "kindnet-458000" in qemu2 ...
	W0520 05:06:02.141768   22227 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:02.141801   22227 start.go:728] Will try again in 5 seconds ...
	I0520 05:06:07.144056   22227 start.go:360] acquireMachinesLock for kindnet-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:07.144738   22227 start.go:364] duration metric: took 438.084µs to acquireMachinesLock for "kindnet-458000"
	I0520 05:06:07.144821   22227 start.go:93] Provisioning new machine with config: &{Name:kindnet-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:07.145142   22227 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:07.155811   22227 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:07.207671   22227 start.go:159] libmachine.API.Create for "kindnet-458000" (driver="qemu2")
	I0520 05:06:07.207739   22227 client.go:168] LocalClient.Create starting
	I0520 05:06:07.207859   22227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:07.207924   22227 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:07.207939   22227 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:07.208020   22227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:07.208066   22227 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:07.208075   22227 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:07.208638   22227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:07.357887   22227 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:07.446227   22227 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:07.446234   22227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:07.446433   22227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2
	I0520 05:06:07.459200   22227 main.go:141] libmachine: STDOUT: 
	I0520 05:06:07.459233   22227 main.go:141] libmachine: STDERR: 
	I0520 05:06:07.459289   22227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2 +20000M
	I0520 05:06:07.470232   22227 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:07.470250   22227 main.go:141] libmachine: STDERR: 
	I0520 05:06:07.470260   22227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2
	I0520 05:06:07.470265   22227 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:07.470309   22227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:ca:6b:ac:50:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kindnet-458000/disk.qcow2
	I0520 05:06:07.472082   22227 main.go:141] libmachine: STDOUT: 
	I0520 05:06:07.472098   22227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:07.472112   22227 client.go:171] duration metric: took 264.369875ms to LocalClient.Create
	I0520 05:06:09.474327   22227 start.go:128] duration metric: took 2.329153166s to createHost
	I0520 05:06:09.474445   22227 start.go:83] releasing machines lock for "kindnet-458000", held for 2.329699708s
	W0520 05:06:09.474859   22227 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:09.488564   22227 out.go:177] 
	W0520 05:06:09.492640   22227 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:06:09.492717   22227 out.go:239] * 
	* 
	W0520 05:06:09.495699   22227 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:09.505606   22227 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.90s)
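
Note that the disk-image half of provisioning succeeds on every attempt: the raw seed image converts to qcow2 and the resize reports "Image resized." with an empty STDERR; only the networked QEMU launch fails. The two qemu-img steps can be replayed by hand to rule out a storage problem on the agent; this sketch reuses the exact flags from the log above but with placeholder file names:

	# same flow as libmachine, with placeholder paths
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	qemu-img info disk.qcow2    # virtual size should now include the extra 20000M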

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.872008333s)

                                                
                                                
-- stdout --
	* [flannel-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-458000" primary control-plane node in "flannel-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 05:06:11.824958   22344 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:06:11.825093   22344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:11.825097   22344 out.go:304] Setting ErrFile to fd 2...
	I0520 05:06:11.825099   22344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:11.825302   22344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:06:11.826357   22344 out.go:298] Setting JSON to false
	I0520 05:06:11.842859   22344 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11142,"bootTime":1716195629,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:06:11.842927   22344 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:06:11.848280   22344 out.go:177] * [flannel-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:06:11.854166   22344 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:06:11.854238   22344 notify.go:220] Checking for updates...
	I0520 05:06:11.858197   22344 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:06:11.862166   22344 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:06:11.865189   22344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:06:11.868206   22344 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:06:11.871129   22344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:06:11.874462   22344 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:06:11.874531   22344 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:06:11.874574   22344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:06:11.879171   22344 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:06:11.886192   22344 start.go:297] selected driver: qemu2
	I0520 05:06:11.886198   22344 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:06:11.886203   22344 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:06:11.888372   22344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:06:11.892174   22344 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:06:11.895266   22344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:06:11.895287   22344 cni.go:84] Creating CNI manager for "flannel"
	I0520 05:06:11.895291   22344 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0520 05:06:11.895336   22344 start.go:340] cluster config:
	{Name:flannel-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:06:11.899522   22344 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:06:11.906143   22344 out.go:177] * Starting "flannel-458000" primary control-plane node in "flannel-458000" cluster
	I0520 05:06:11.910126   22344 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:06:11.910138   22344 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:06:11.910145   22344 cache.go:56] Caching tarball of preloaded images
	I0520 05:06:11.910193   22344 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:06:11.910198   22344 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:06:11.910245   22344 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/flannel-458000/config.json ...
	I0520 05:06:11.910256   22344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/flannel-458000/config.json: {Name:mk4440c8a9b166699b9ffdbd2a0676ca4fe4183d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:06:11.910453   22344 start.go:360] acquireMachinesLock for flannel-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:11.910483   22344 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "flannel-458000"
	I0520 05:06:11.910494   22344 start.go:93] Provisioning new machine with config: &{Name:flannel-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:11.910522   22344 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:11.919196   22344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:11.934220   22344 start.go:159] libmachine.API.Create for "flannel-458000" (driver="qemu2")
	I0520 05:06:11.934248   22344 client.go:168] LocalClient.Create starting
	I0520 05:06:11.934314   22344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:11.934349   22344 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:11.934362   22344 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:11.934406   22344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:11.934428   22344 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:11.934442   22344 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:11.934795   22344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:12.072145   22344 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:12.157254   22344 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:12.157262   22344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:12.157447   22344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2
	I0520 05:06:12.169834   22344 main.go:141] libmachine: STDOUT: 
	I0520 05:06:12.169855   22344 main.go:141] libmachine: STDERR: 
	I0520 05:06:12.169917   22344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2 +20000M
	I0520 05:06:12.180978   22344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:12.181003   22344 main.go:141] libmachine: STDERR: 
	I0520 05:06:12.181022   22344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2
	I0520 05:06:12.181028   22344 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:12.181061   22344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:99:d0:b0:4f:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2
	I0520 05:06:12.182908   22344 main.go:141] libmachine: STDOUT: 
	I0520 05:06:12.182922   22344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:12.182943   22344 client.go:171] duration metric: took 248.692709ms to LocalClient.Create
	I0520 05:06:14.185146   22344 start.go:128] duration metric: took 2.274607791s to createHost
	I0520 05:06:14.185255   22344 start.go:83] releasing machines lock for "flannel-458000", held for 2.274770292s
	W0520 05:06:14.185414   22344 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:14.201721   22344 out.go:177] * Deleting "flannel-458000" in qemu2 ...
	W0520 05:06:14.224783   22344 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:14.224811   22344 start.go:728] Will try again in 5 seconds ...
	I0520 05:06:19.226960   22344 start.go:360] acquireMachinesLock for flannel-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:19.227256   22344 start.go:364] duration metric: took 230.959µs to acquireMachinesLock for "flannel-458000"
	I0520 05:06:19.227307   22344 start.go:93] Provisioning new machine with config: &{Name:flannel-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:19.227423   22344 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:19.233803   22344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:19.271707   22344 start.go:159] libmachine.API.Create for "flannel-458000" (driver="qemu2")
	I0520 05:06:19.271751   22344 client.go:168] LocalClient.Create starting
	I0520 05:06:19.271848   22344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:19.271922   22344 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:19.271939   22344 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:19.272002   22344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:19.272040   22344 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:19.272051   22344 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:19.272530   22344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:19.417817   22344 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:19.600220   22344 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:19.600229   22344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:19.600448   22344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2
	I0520 05:06:19.613493   22344 main.go:141] libmachine: STDOUT: 
	I0520 05:06:19.613517   22344 main.go:141] libmachine: STDERR: 
	I0520 05:06:19.613581   22344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2 +20000M
	I0520 05:06:19.624874   22344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:19.624893   22344 main.go:141] libmachine: STDERR: 
	I0520 05:06:19.624905   22344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2
	I0520 05:06:19.624910   22344 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:19.624947   22344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a3:c9:f5:4c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/flannel-458000/disk.qcow2
	I0520 05:06:19.626804   22344 main.go:141] libmachine: STDOUT: 
	I0520 05:06:19.626821   22344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:19.626833   22344 client.go:171] duration metric: took 355.081459ms to LocalClient.Create
	I0520 05:06:21.629004   22344 start.go:128] duration metric: took 2.401569041s to createHost
	I0520 05:06:21.629141   22344 start.go:83] releasing machines lock for "flannel-458000", held for 2.401797084s
	W0520 05:06:21.629490   22344 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:21.640133   22344 out.go:177] 
	W0520 05:06:21.645104   22344 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:06:21.645160   22344 out.go:239] * 
	* 
	W0520 05:06:21.647836   22344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:21.657067   22344 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
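
All four Start failures in this group die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 command is never actually launched. The failing connect is easy to reproduce outside the test suite; the sketch below (probe.go, a hypothetical standalone program, not part of minikube) simply dials the same SocketVMnetPath shown in the cluster configs in these logs and reports the error:

    // probe.go — hypothetical standalone check, not minikube code.
    // Dials the socket_vmnet control socket the way socket_vmnet_client
    // does; while the daemon is down, Dial returns the same
    // "connection refused" seen throughout the logs above.
    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
        conn, err := net.Dial("unix", sock)
        if err != nil {
            fmt.Fprintf(os.Stderr, "connect %s: %v\n", sock, err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

A "connection refused" from such a probe points at the socket_vmnet daemon on the CI host rather than at the individual network-plugin tests, which is consistent with every plugin variant failing identically.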

TestNetworkPlugins/group/enable-default-cni/Start (9.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.759146166s)

-- stdout --
	* [enable-default-cni-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-458000" primary control-plane node in "enable-default-cni-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:06:24.030655   22462 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:06:24.030780   22462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:24.030783   22462 out.go:304] Setting ErrFile to fd 2...
	I0520 05:06:24.030786   22462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:24.030912   22462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:06:24.032063   22462 out.go:298] Setting JSON to false
	I0520 05:06:24.048503   22462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11155,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:06:24.048579   22462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:06:24.054092   22462 out.go:177] * [enable-default-cni-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:06:24.060916   22462 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:06:24.064988   22462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:06:24.060970   22462 notify.go:220] Checking for updates...
	I0520 05:06:24.068928   22462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:06:24.071967   22462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:06:24.074937   22462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:06:24.077934   22462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:06:24.081235   22462 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:06:24.081305   22462 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:06:24.081354   22462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:06:24.085976   22462 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:06:24.092937   22462 start.go:297] selected driver: qemu2
	I0520 05:06:24.092946   22462 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:06:24.092953   22462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:06:24.095300   22462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:06:24.099941   22462 out.go:177] * Automatically selected the socket_vmnet network
	E0520 05:06:24.102992   22462 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0520 05:06:24.103006   22462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:06:24.103024   22462 cni.go:84] Creating CNI manager for "bridge"
	I0520 05:06:24.103028   22462 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:06:24.103069   22462 start.go:340] cluster config:
	{Name:enable-default-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:06:24.107624   22462 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:06:24.114913   22462 out.go:177] * Starting "enable-default-cni-458000" primary control-plane node in "enable-default-cni-458000" cluster
	I0520 05:06:24.118893   22462 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:06:24.118906   22462 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:06:24.118915   22462 cache.go:56] Caching tarball of preloaded images
	I0520 05:06:24.118971   22462 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:06:24.118977   22462 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:06:24.119033   22462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/enable-default-cni-458000/config.json ...
	I0520 05:06:24.119044   22462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/enable-default-cni-458000/config.json: {Name:mke7758de2aefea9075f7ec4216799ffcb61113a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:06:24.119253   22462 start.go:360] acquireMachinesLock for enable-default-cni-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:24.119288   22462 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "enable-default-cni-458000"
	I0520 05:06:24.119300   22462 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:24.119325   22462 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:24.125930   22462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:24.142290   22462 start.go:159] libmachine.API.Create for "enable-default-cni-458000" (driver="qemu2")
	I0520 05:06:24.142315   22462 client.go:168] LocalClient.Create starting
	I0520 05:06:24.142374   22462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:24.142405   22462 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:24.142413   22462 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:24.142457   22462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:24.142480   22462 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:24.142486   22462 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:24.142842   22462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:24.282987   22462 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:24.347819   22462 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:24.347826   22462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:24.348013   22462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2
	I0520 05:06:24.360492   22462 main.go:141] libmachine: STDOUT: 
	I0520 05:06:24.360514   22462 main.go:141] libmachine: STDERR: 
	I0520 05:06:24.360573   22462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2 +20000M
	I0520 05:06:24.371519   22462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:24.371545   22462 main.go:141] libmachine: STDERR: 
	I0520 05:06:24.371559   22462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2
	I0520 05:06:24.371567   22462 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:24.371594   22462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:84:07:ad:fc:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2
	I0520 05:06:24.373430   22462 main.go:141] libmachine: STDOUT: 
	I0520 05:06:24.373446   22462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:24.373469   22462 client.go:171] duration metric: took 231.150708ms to LocalClient.Create
	I0520 05:06:26.375683   22462 start.go:128] duration metric: took 2.256343292s to createHost
	I0520 05:06:26.375759   22462 start.go:83] releasing machines lock for "enable-default-cni-458000", held for 2.256481292s
	W0520 05:06:26.375811   22462 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:26.384406   22462 out.go:177] * Deleting "enable-default-cni-458000" in qemu2 ...
	W0520 05:06:26.406765   22462 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:26.406788   22462 start.go:728] Will try again in 5 seconds ...
	I0520 05:06:31.409056   22462 start.go:360] acquireMachinesLock for enable-default-cni-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:31.409645   22462 start.go:364] duration metric: took 485.292µs to acquireMachinesLock for "enable-default-cni-458000"
	I0520 05:06:31.409800   22462 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:31.410261   22462 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:31.417039   22462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:31.466394   22462 start.go:159] libmachine.API.Create for "enable-default-cni-458000" (driver="qemu2")
	I0520 05:06:31.466446   22462 client.go:168] LocalClient.Create starting
	I0520 05:06:31.466570   22462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:31.466646   22462 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:31.466661   22462 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:31.466749   22462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:31.466795   22462 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:31.466805   22462 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:31.467344   22462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:31.615861   22462 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:31.691446   22462 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:31.691452   22462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:31.691647   22462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2
	I0520 05:06:31.704629   22462 main.go:141] libmachine: STDOUT: 
	I0520 05:06:31.704651   22462 main.go:141] libmachine: STDERR: 
	I0520 05:06:31.704729   22462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2 +20000M
	I0520 05:06:31.715967   22462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:31.715987   22462 main.go:141] libmachine: STDERR: 
	I0520 05:06:31.715998   22462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2
	I0520 05:06:31.716001   22462 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:31.716029   22462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:b9:28:51:ec:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/enable-default-cni-458000/disk.qcow2
	I0520 05:06:31.717802   22462 main.go:141] libmachine: STDOUT: 
	I0520 05:06:31.717819   22462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:31.717832   22462 client.go:171] duration metric: took 251.382541ms to LocalClient.Create
	I0520 05:06:33.720039   22462 start.go:128] duration metric: took 2.309752416s to createHost
	I0520 05:06:33.720265   22462 start.go:83] releasing machines lock for "enable-default-cni-458000", held for 2.310532541s
	W0520 05:06:33.720583   22462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:33.730267   22462 out.go:177] 
	W0520 05:06:33.735354   22462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:06:33.735443   22462 out.go:239] * 
	* 
	W0520 05:06:33.738220   22462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:33.745224   22462 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.76s)
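
Note the E0520 line in the stderr above: minikube treats --enable-default-cni as deprecated and rewrites it to --cni=bridge, so this profile ends up with the same CNI:bridge cluster config as the dedicated bridge test below; both runs then fail on the identical socket_vmnet connect, not on anything CNI-specific.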

TestNetworkPlugins/group/bridge/Start (9.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.837821875s)

-- stdout --
	* [bridge-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-458000" primary control-plane node in "bridge-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:06:35.938129   22572 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:06:35.938258   22572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:35.938262   22572 out.go:304] Setting ErrFile to fd 2...
	I0520 05:06:35.938264   22572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:35.938424   22572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:06:35.939659   22572 out.go:298] Setting JSON to false
	I0520 05:06:35.956160   22572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11166,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:06:35.956228   22572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:06:35.961838   22572 out.go:177] * [bridge-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:06:35.969692   22572 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:06:35.973678   22572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:06:35.969756   22572 notify.go:220] Checking for updates...
	I0520 05:06:35.976699   22572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:06:35.979734   22572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:06:35.982662   22572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:06:35.985727   22572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:06:35.989039   22572 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:06:35.989108   22572 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:06:35.989161   22572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:06:35.993699   22572 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:06:36.000630   22572 start.go:297] selected driver: qemu2
	I0520 05:06:36.000640   22572 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:06:36.000646   22572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:06:36.003012   22572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:06:36.005641   22572 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:06:36.008819   22572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:06:36.008837   22572 cni.go:84] Creating CNI manager for "bridge"
	I0520 05:06:36.008840   22572 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:06:36.008874   22572 start.go:340] cluster config:
	{Name:bridge-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:06:36.013454   22572 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:06:36.020687   22572 out.go:177] * Starting "bridge-458000" primary control-plane node in "bridge-458000" cluster
	I0520 05:06:36.024504   22572 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:06:36.024520   22572 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:06:36.024533   22572 cache.go:56] Caching tarball of preloaded images
	I0520 05:06:36.024582   22572 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:06:36.024587   22572 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:06:36.024639   22572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/bridge-458000/config.json ...
	I0520 05:06:36.024649   22572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/bridge-458000/config.json: {Name:mkb73078adae4a0e16cc25f0d154adec9192f1c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:06:36.024961   22572 start.go:360] acquireMachinesLock for bridge-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:36.024993   22572 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "bridge-458000"
	I0520 05:06:36.025004   22572 start.go:93] Provisioning new machine with config: &{Name:bridge-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:36.025032   22572 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:36.032534   22572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:36.048703   22572 start.go:159] libmachine.API.Create for "bridge-458000" (driver="qemu2")
	I0520 05:06:36.048731   22572 client.go:168] LocalClient.Create starting
	I0520 05:06:36.048789   22572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:36.048824   22572 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:36.048834   22572 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:36.048875   22572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:36.048897   22572 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:36.048904   22572 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:36.049330   22572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:36.189399   22572 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:36.364840   22572 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:36.364850   22572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:36.365055   22572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2
	I0520 05:06:36.377963   22572 main.go:141] libmachine: STDOUT: 
	I0520 05:06:36.377984   22572 main.go:141] libmachine: STDERR: 
	I0520 05:06:36.378032   22572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2 +20000M
	I0520 05:06:36.388882   22572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:36.388905   22572 main.go:141] libmachine: STDERR: 
	I0520 05:06:36.388933   22572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2
	I0520 05:06:36.388939   22572 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:36.388977   22572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:7a:b9:b3:f9:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2
	I0520 05:06:36.390750   22572 main.go:141] libmachine: STDOUT: 
	I0520 05:06:36.390766   22572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:36.390789   22572 client.go:171] duration metric: took 342.056834ms to LocalClient.Create
	I0520 05:06:38.392914   22572 start.go:128] duration metric: took 2.367884167s to createHost
	I0520 05:06:38.392950   22572 start.go:83] releasing machines lock for "bridge-458000", held for 2.367967833s
	W0520 05:06:38.392989   22572 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:38.407582   22572 out.go:177] * Deleting "bridge-458000" in qemu2 ...
	W0520 05:06:38.424647   22572 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:38.424662   22572 start.go:728] Will try again in 5 seconds ...
	I0520 05:06:43.426770   22572 start.go:360] acquireMachinesLock for bridge-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:43.427009   22572 start.go:364] duration metric: took 196.417µs to acquireMachinesLock for "bridge-458000"
	I0520 05:06:43.427035   22572 start.go:93] Provisioning new machine with config: &{Name:bridge-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:43.427108   22572 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:43.436487   22572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:43.455378   22572 start.go:159] libmachine.API.Create for "bridge-458000" (driver="qemu2")
	I0520 05:06:43.455412   22572 client.go:168] LocalClient.Create starting
	I0520 05:06:43.455481   22572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:43.455522   22572 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:43.455532   22572 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:43.455570   22572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:43.455594   22572 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:43.455601   22572 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:43.455992   22572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:43.593970   22572 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:43.683292   22572 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:43.683298   22572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:43.683491   22572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2
	I0520 05:06:43.696185   22572 main.go:141] libmachine: STDOUT: 
	I0520 05:06:43.696209   22572 main.go:141] libmachine: STDERR: 
	I0520 05:06:43.696261   22572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2 +20000M
	I0520 05:06:43.707407   22572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:43.707427   22572 main.go:141] libmachine: STDERR: 
	I0520 05:06:43.707461   22572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2
	I0520 05:06:43.707466   22572 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:43.707503   22572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:54:9b:d1:2b:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/bridge-458000/disk.qcow2
	I0520 05:06:43.709209   22572 main.go:141] libmachine: STDOUT: 
	I0520 05:06:43.709228   22572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:43.709241   22572 client.go:171] duration metric: took 253.827417ms to LocalClient.Create
	I0520 05:06:45.711443   22572 start.go:128] duration metric: took 2.284324041s to createHost
	I0520 05:06:45.711519   22572 start.go:83] releasing machines lock for "bridge-458000", held for 2.284515833s
	W0520 05:06:45.711947   22572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:45.720603   22572 out.go:177] 
	W0520 05:06:45.725812   22572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:06:45.725845   22572 out.go:239] * 
	* 
	W0520 05:06:45.728573   22572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:45.735582   22572 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.84s)
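
Each failed start in this group also follows the same retry shape visible in the stderr: the first create fails, the half-created profile is deleted, minikube waits five seconds, retries once, and then exits with GUEST_PROVISION (exit status 80). The sketch below (retry.go, a hypothetical reduction for illustration, not minikube's actual implementation) condenses that control flow:

    // retry.go — hypothetical reduction of the start flow in the logs:
    // StartHost fails, the profile is deleted, one retry after 5s, then abort.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for libmachine's create path; here it always
    // fails the way the logs do while socket_vmnet is unreachable.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            // the logs show the partially created VM being deleted here
            time.Sleep(5 * time.Second)
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                return
            }
        }
        fmt.Println("host started")
    }

Because the daemon never recovers within that five-second window, the retry is deterministic, and every Start test in the group fails in roughly ten seconds, matching the durations recorded above.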

TestNetworkPlugins/group/kubenet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-458000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.874426333s)

-- stdout --
	* [kubenet-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-458000" primary control-plane node in "kubenet-458000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-458000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:06:47.893442   22682 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:06:47.893566   22682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:47.893571   22682 out.go:304] Setting ErrFile to fd 2...
	I0520 05:06:47.893573   22682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:47.893700   22682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:06:47.894771   22682 out.go:298] Setting JSON to false
	I0520 05:06:47.911368   22682 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11178,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:06:47.911431   22682 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:06:47.917240   22682 out.go:177] * [kubenet-458000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:06:47.924228   22682 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:06:47.928199   22682 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:06:47.924274   22682 notify.go:220] Checking for updates...
	I0520 05:06:47.934149   22682 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:06:47.937225   22682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:06:47.940196   22682 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:06:47.943172   22682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:06:47.946588   22682 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:06:47.946652   22682 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:06:47.946717   22682 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:06:47.951109   22682 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:06:47.958200   22682 start.go:297] selected driver: qemu2
	I0520 05:06:47.958212   22682 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:06:47.958219   22682 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:06:47.960519   22682 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:06:47.964147   22682 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:06:47.967271   22682 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:06:47.967287   22682 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0520 05:06:47.967314   22682 start.go:340] cluster config:
	{Name:kubenet-458000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:06:47.971581   22682 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:06:47.979176   22682 out.go:177] * Starting "kubenet-458000" primary control-plane node in "kubenet-458000" cluster
	I0520 05:06:47.983245   22682 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:06:47.983262   22682 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:06:47.983278   22682 cache.go:56] Caching tarball of preloaded images
	I0520 05:06:47.983333   22682 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:06:47.983337   22682 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:06:47.983391   22682 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kubenet-458000/config.json ...
	I0520 05:06:47.983401   22682 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/kubenet-458000/config.json: {Name:mk4a985c1532ec6bb129c0c97fe5ff87905b9ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:06:47.983591   22682 start.go:360] acquireMachinesLock for kubenet-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:47.983620   22682 start.go:364] duration metric: took 24.166µs to acquireMachinesLock for "kubenet-458000"
	I0520 05:06:47.983631   22682 start.go:93] Provisioning new machine with config: &{Name:kubenet-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:47.983656   22682 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:47.992158   22682 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:48.007336   22682 start.go:159] libmachine.API.Create for "kubenet-458000" (driver="qemu2")
	I0520 05:06:48.007367   22682 client.go:168] LocalClient.Create starting
	I0520 05:06:48.007441   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:48.007477   22682 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:48.007489   22682 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:48.007530   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:48.007552   22682 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:48.007562   22682 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:48.008028   22682 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:48.145390   22682 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:48.232020   22682 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:48.232026   22682 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:48.232196   22682 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2
	I0520 05:06:48.245107   22682 main.go:141] libmachine: STDOUT: 
	I0520 05:06:48.245128   22682 main.go:141] libmachine: STDERR: 
	I0520 05:06:48.245200   22682 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2 +20000M
	I0520 05:06:48.256627   22682 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:48.256641   22682 main.go:141] libmachine: STDERR: 
	I0520 05:06:48.256655   22682 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2
	I0520 05:06:48.256662   22682 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:48.256698   22682 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e2:66:86:07:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2
	I0520 05:06:48.258531   22682 main.go:141] libmachine: STDOUT: 
	I0520 05:06:48.258548   22682 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:48.258568   22682 client.go:171] duration metric: took 251.197291ms to LocalClient.Create
	I0520 05:06:50.260638   22682 start.go:128] duration metric: took 2.276992542s to createHost
	I0520 05:06:50.260658   22682 start.go:83] releasing machines lock for "kubenet-458000", held for 2.277050834s
	W0520 05:06:50.260672   22682 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:50.265223   22682 out.go:177] * Deleting "kubenet-458000" in qemu2 ...
	W0520 05:06:50.273133   22682 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:50.273148   22682 start.go:728] Will try again in 5 seconds ...
	I0520 05:06:55.273704   22682 start.go:360] acquireMachinesLock for kubenet-458000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:06:55.274158   22682 start.go:364] duration metric: took 382.75µs to acquireMachinesLock for "kubenet-458000"
	I0520 05:06:55.274268   22682 start.go:93] Provisioning new machine with config: &{Name:kubenet-458000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-458000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:06:55.274527   22682 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:06:55.280975   22682 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 05:06:55.324334   22682 start.go:159] libmachine.API.Create for "kubenet-458000" (driver="qemu2")
	I0520 05:06:55.324390   22682 client.go:168] LocalClient.Create starting
	I0520 05:06:55.324520   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:06:55.324584   22682 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:55.324603   22682 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:55.324666   22682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:06:55.324709   22682 main.go:141] libmachine: Decoding PEM data...
	I0520 05:06:55.324723   22682 main.go:141] libmachine: Parsing certificate...
	I0520 05:06:55.325236   22682 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:06:55.470351   22682 main.go:141] libmachine: Creating SSH key...
	I0520 05:06:55.674560   22682 main.go:141] libmachine: Creating Disk image...
	I0520 05:06:55.674574   22682 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:06:55.674818   22682 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2
	I0520 05:06:55.688450   22682 main.go:141] libmachine: STDOUT: 
	I0520 05:06:55.688470   22682 main.go:141] libmachine: STDERR: 
	I0520 05:06:55.688537   22682 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2 +20000M
	I0520 05:06:55.699913   22682 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:06:55.699930   22682 main.go:141] libmachine: STDERR: 
	I0520 05:06:55.699945   22682 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2
	I0520 05:06:55.699952   22682 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:06:55.699989   22682 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a3:60:a1:bd:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/kubenet-458000/disk.qcow2
	I0520 05:06:55.701896   22682 main.go:141] libmachine: STDOUT: 
	I0520 05:06:55.701910   22682 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:06:55.701921   22682 client.go:171] duration metric: took 377.528667ms to LocalClient.Create
	I0520 05:06:57.704131   22682 start.go:128] duration metric: took 2.429579708s to createHost
	I0520 05:06:57.704260   22682 start.go:83] releasing machines lock for "kubenet-458000", held for 2.430102583s
	W0520 05:06:57.704659   22682 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-458000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:06:57.713363   22682 out.go:177] 
	W0520 05:06:57.717407   22682 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:06:57.717447   22682 out.go:239] * 
	* 
	W0520 05:06:57.720265   22682 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:06:57.726395   22682 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.88s)
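
Every failed start in this run stops at the same step: socket_vmnet_client cannot connect to the daemon socket, so the QEMU VM is never launched and minikube retries once before giving up. A pre-flight check along the following lines would separate a dead socket_vmnet daemon from a genuine minikube regression (a sketch only, not part of the test suite; the socket and client paths are taken from the log above, and the brew service name assumes a Homebrew install of socket_vmnet):

	# Sketch of a manual pre-flight check for the qemu2/socket_vmnet setup.
	if [ -S /var/run/socket_vmnet ]; then
	    echo "socket present"
	else
	    echo "socket missing; try: sudo brew services start socket_vmnet"
	fi
	# The path existing is not enough; confirm a process is actually listening.
	sudo lsof -U 2>/dev/null | grep -q socket_vmnet && echo "daemon listening" \
	    || echo "no listener (matches the Connection refused above)"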

TestStartStop/group/old-k8s-version/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-593000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-593000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.987261667s)

-- stdout --
	* [old-k8s-version-593000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-593000" primary control-plane node in "old-k8s-version-593000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-593000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:06:59.924388   22796 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:06:59.924522   22796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:59.924526   22796 out.go:304] Setting ErrFile to fd 2...
	I0520 05:06:59.924528   22796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:06:59.924646   22796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:06:59.925756   22796 out.go:298] Setting JSON to false
	I0520 05:06:59.942340   22796 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11190,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:06:59.942405   22796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:06:59.948439   22796 out.go:177] * [old-k8s-version-593000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:06:59.955500   22796 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:06:59.955601   22796 notify.go:220] Checking for updates...
	I0520 05:06:59.962414   22796 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:06:59.965476   22796 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:06:59.968392   22796 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:06:59.971424   22796 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:06:59.974444   22796 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:06:59.977778   22796 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:06:59.977848   22796 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:06:59.977893   22796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:06:59.982407   22796 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:06:59.995950   22796 start.go:297] selected driver: qemu2
	I0520 05:06:59.995957   22796 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:06:59.995964   22796 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:06:59.998104   22796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:07:00.002430   22796 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:07:00.005636   22796 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:00.005653   22796 cni.go:84] Creating CNI manager for ""
	I0520 05:07:00.005659   22796 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 05:07:00.005705   22796 start.go:340] cluster config:
	{Name:old-k8s-version-593000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:00.009869   22796 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:00.017453   22796 out.go:177] * Starting "old-k8s-version-593000" primary control-plane node in "old-k8s-version-593000" cluster
	I0520 05:07:00.021472   22796 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 05:07:00.021498   22796 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 05:07:00.021511   22796 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:00.021577   22796 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:00.021582   22796 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 05:07:00.021643   22796 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/old-k8s-version-593000/config.json ...
	I0520 05:07:00.021654   22796 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/old-k8s-version-593000/config.json: {Name:mk9f1555483912357a9b54c50f33ecc096b7af93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:07:00.021861   22796 start.go:360] acquireMachinesLock for old-k8s-version-593000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:00.021891   22796 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "old-k8s-version-593000"
	I0520 05:07:00.021903   22796 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:00.021930   22796 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:00.025521   22796 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:00.040875   22796 start.go:159] libmachine.API.Create for "old-k8s-version-593000" (driver="qemu2")
	I0520 05:07:00.040910   22796 client.go:168] LocalClient.Create starting
	I0520 05:07:00.040972   22796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:00.041003   22796 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:00.041011   22796 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:00.041048   22796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:00.041070   22796 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:00.041077   22796 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:00.041506   22796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:00.201120   22796 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:00.323619   22796 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:00.323629   22796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:00.323823   22796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:00.337764   22796 main.go:141] libmachine: STDOUT: 
	I0520 05:07:00.337784   22796 main.go:141] libmachine: STDERR: 
	I0520 05:07:00.337853   22796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2 +20000M
	I0520 05:07:00.349679   22796 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:00.349698   22796 main.go:141] libmachine: STDERR: 
	I0520 05:07:00.349727   22796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:00.349731   22796 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:00.349767   22796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:16:20:6e:2b:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:00.351650   22796 main.go:141] libmachine: STDOUT: 
	I0520 05:07:00.351664   22796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:00.351683   22796 client.go:171] duration metric: took 310.77025ms to LocalClient.Create
	I0520 05:07:02.353903   22796 start.go:128] duration metric: took 2.331964417s to createHost
	I0520 05:07:02.353969   22796 start.go:83] releasing machines lock for "old-k8s-version-593000", held for 2.332088625s
	W0520 05:07:02.354031   22796 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:02.365777   22796 out.go:177] * Deleting "old-k8s-version-593000" in qemu2 ...
	W0520 05:07:02.388516   22796 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:02.388548   22796 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:07.390642   22796 start.go:360] acquireMachinesLock for old-k8s-version-593000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:07.390940   22796 start.go:364] duration metric: took 229.875µs to acquireMachinesLock for "old-k8s-version-593000"
	I0520 05:07:07.391022   22796 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:07.391209   22796 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:07.397321   22796 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:07.440011   22796 start.go:159] libmachine.API.Create for "old-k8s-version-593000" (driver="qemu2")
	I0520 05:07:07.440059   22796 client.go:168] LocalClient.Create starting
	I0520 05:07:07.440211   22796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:07.440298   22796 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:07.440325   22796 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:07.440390   22796 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:07.440436   22796 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:07.440453   22796 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:07.440985   22796 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:07.591285   22796 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:07.809113   22796 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:07.809126   22796 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:07.809333   22796 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:07.822714   22796 main.go:141] libmachine: STDOUT: 
	I0520 05:07:07.822739   22796 main.go:141] libmachine: STDERR: 
	I0520 05:07:07.822822   22796 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2 +20000M
	I0520 05:07:07.834463   22796 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:07.834480   22796 main.go:141] libmachine: STDERR: 
	I0520 05:07:07.834494   22796 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:07.834499   22796 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:07.834544   22796 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5f:ac:f8:08:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:07.836349   22796 main.go:141] libmachine: STDOUT: 
	I0520 05:07:07.836364   22796 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:07.836379   22796 client.go:171] duration metric: took 396.315166ms to LocalClient.Create
	I0520 05:07:09.838585   22796 start.go:128] duration metric: took 2.4473525s to createHost
	I0520 05:07:09.838671   22796 start.go:83] releasing machines lock for "old-k8s-version-593000", held for 2.447728875s
	W0520 05:07:09.839116   22796 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:09.854879   22796 out.go:177] 
	W0520 05:07:09.858958   22796 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:09.859022   22796 out.go:239] * 
	* 
	W0520 05:07:09.861670   22796 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:09.871851   22796 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-593000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (65.888792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.06s)
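
Because FirstStart never brings a VM up, no kubeconfig context is written for old-k8s-version-593000, and every serial subtest below (DeployApp, EnableAddonWhileActive, SecondStart) fails on that missing context rather than on the behavior it actually targets. A quick manual confirmation (a sketch, not part of the harness):

	# After a successful start the context would be listed; here grep finds nothing.
	kubectl config get-contexts -o name | grep -x old-k8s-version-593000 \
	    || echo "context absent: the cluster never started"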

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-593000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-593000 create -f testdata/busybox.yaml: exit status 1 (31.358334ms)

** stderr ** 
	error: context "old-k8s-version-593000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-593000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (28.40525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (28.081125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-593000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-593000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-593000 describe deploy/metrics-server -n kube-system: exit status 1 (26.985459ms)

** stderr ** 
	error: context "old-k8s-version-593000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-593000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (28.157041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
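
The addons enable command above is the one step here that does not report a non-zero exit; the failure comes from the verification step, which needs a live apiserver to describe the deployment. Against a running cluster, the test's expectation could be checked by hand roughly as follows (a sketch mirroring the assertion; the jsonpath simply walks the standard Deployment schema):

	# Sketch: read the metrics-server image and look for the fake.domain override.
	kubectl --context old-k8s-version-593000 -n kube-system \
	    get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}' \
	    | grep 'fake.domain/registry.k8s.io/echoserver:1.4'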

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-593000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-593000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.190728s)

-- stdout --
	* [old-k8s-version-593000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-593000" primary control-plane node in "old-k8s-version-593000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-593000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-593000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:13.565021   22849 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:13.565152   22849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:13.565155   22849 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:13.565158   22849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:13.565284   22849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:13.566268   22849 out.go:298] Setting JSON to false
	I0520 05:07:13.582664   22849 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11204,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:13.582729   22849 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:13.587697   22849 out.go:177] * [old-k8s-version-593000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:13.594619   22849 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:13.594654   22849 notify.go:220] Checking for updates...
	I0520 05:07:13.602465   22849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:13.605654   22849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:13.608684   22849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:13.611669   22849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:13.614651   22849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:13.617962   22849 config.go:182] Loaded profile config "old-k8s-version-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 05:07:13.621668   22849 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 05:07:13.624685   22849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:13.628629   22849 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 05:07:13.635752   22849 start.go:297] selected driver: qemu2
	I0520 05:07:13.635757   22849 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:13.635805   22849 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:13.638168   22849 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:13.638193   22849 cni.go:84] Creating CNI manager for ""
	I0520 05:07:13.638199   22849 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 05:07:13.638220   22849 start.go:340] cluster config:
	{Name:old-k8s-version-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:13.642704   22849 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:13.649473   22849 out.go:177] * Starting "old-k8s-version-593000" primary control-plane node in "old-k8s-version-593000" cluster
	I0520 05:07:13.653668   22849 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 05:07:13.653685   22849 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 05:07:13.653705   22849 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:13.653800   22849 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:13.653805   22849 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 05:07:13.653859   22849 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/old-k8s-version-593000/config.json ...
	I0520 05:07:13.654316   22849 start.go:360] acquireMachinesLock for old-k8s-version-593000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:13.654348   22849 start.go:364] duration metric: took 23.209µs to acquireMachinesLock for "old-k8s-version-593000"
	I0520 05:07:13.654357   22849 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:13.654364   22849 fix.go:54] fixHost starting: 
	I0520 05:07:13.654479   22849 fix.go:112] recreateIfNeeded on old-k8s-version-593000: state=Stopped err=<nil>
	W0520 05:07:13.654488   22849 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:13.658574   22849 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-593000" ...
	I0520 05:07:13.666692   22849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5f:ac:f8:08:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:13.669013   22849 main.go:141] libmachine: STDOUT: 
	I0520 05:07:13.669036   22849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:13.669067   22849 fix.go:56] duration metric: took 14.7035ms for fixHost
	I0520 05:07:13.669072   22849 start.go:83] releasing machines lock for "old-k8s-version-593000", held for 14.719625ms
	W0520 05:07:13.669080   22849 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:13.669123   22849 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:13.669129   22849 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:18.670515   22849 start.go:360] acquireMachinesLock for old-k8s-version-593000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:18.671054   22849 start.go:364] duration metric: took 422.666µs to acquireMachinesLock for "old-k8s-version-593000"
	I0520 05:07:18.671153   22849 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:18.671175   22849 fix.go:54] fixHost starting: 
	I0520 05:07:18.671945   22849 fix.go:112] recreateIfNeeded on old-k8s-version-593000: state=Stopped err=<nil>
	W0520 05:07:18.671971   22849 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:18.681873   22849 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-593000" ...
	I0520 05:07:18.686026   22849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:5f:ac:f8:08:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/old-k8s-version-593000/disk.qcow2
	I0520 05:07:18.695695   22849 main.go:141] libmachine: STDOUT: 
	I0520 05:07:18.695762   22849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:18.695842   22849 fix.go:56] duration metric: took 24.671333ms for fixHost
	I0520 05:07:18.695857   22849 start.go:83] releasing machines lock for "old-k8s-version-593000", held for 24.781166ms
	W0520 05:07:18.696043   22849 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-593000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-593000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:18.703808   22849 out.go:177] 
	W0520 05:07:18.707834   22849 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:18.707879   22849 out.go:239] * 
	* 
	W0520 05:07:18.710679   22849 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:18.716812   22849 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-593000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (50.731416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
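
Every restart attempt above dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor. On a unix socket, "Connection refused" means nothing is listening at that path, which points at the socket_vmnet daemon being down on the CI host rather than at this test. A minimal reachability sketch, assuming only the socket path shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket path that socket_vmnet_client uses above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening this reports "connect: connection refused",
		// matching the driver error in the log.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}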

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-593000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (30.87925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
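
This failure is upstream of any pod check: SecondStart never brought the cluster up, so the kubeconfig has no "old-k8s-version-593000" context and building a client config fails immediately. A sketch of that context lookup using k8s.io/client-go's clientcmd; the loading rules and overrides here are assumptions for illustration, not minikube's exact wiring:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve kubeconfig the standard way and force the context the test expects.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-593000"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// With the context absent this yields an error like:
		//   context "old-k8s-version-593000" does not exist
		fmt.Println("client config:", err)
	}
}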

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-593000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-593000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-593000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.786917ms)

** stderr ** 
	error: context "old-k8s-version-593000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-593000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (28.865417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-593000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (28.626292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
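
The "(-want +got)" diff above uses the output convention of github.com/google/go-cmp: "-" lines are expected entries, "+" lines would be unexpected ones. All eight v1.20.0 images show as missing because "image list" ran against a VM that never started, so the got side is empty. A hedged reconstruction of that comparison (want is truncated here; the test's actual code may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images for v1.20.0 (truncated for illustration).
	want := []string{
		"k8s.gcr.io/pause:3.2",
		"k8s.gcr.io/kube-proxy:v1.20.0",
	}
	var got []string // empty: the VM never started, so nothing was listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}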

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-593000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-593000 --alsologtostderr -v=1: exit status 83 (40.515666ms)

-- stdout --
	* The control-plane node old-k8s-version-593000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-593000"

-- /stdout --
** stderr ** 
	I0520 05:07:18.965436   22868 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:18.966361   22868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:18.966365   22868 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:18.966367   22868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:18.966524   22868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:18.966736   22868 out.go:298] Setting JSON to false
	I0520 05:07:18.966741   22868 mustload.go:65] Loading cluster: old-k8s-version-593000
	I0520 05:07:18.966933   22868 config.go:182] Loaded profile config "old-k8s-version-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 05:07:18.971765   22868 out.go:177] * The control-plane node old-k8s-version-593000 host is not running: state=Stopped
	I0520 05:07:18.975501   22868 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-593000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-593000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (28.386208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (27.8215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-593000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.805812708s)

-- stdout --
	* [no-preload-829000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-829000" primary control-plane node in "no-preload-829000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-829000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:19.415183   22891 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:19.415329   22891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:19.415332   22891 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:19.415334   22891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:19.415464   22891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:19.416589   22891 out.go:298] Setting JSON to false
	I0520 05:07:19.433640   22891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11210,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:19.433737   22891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:19.437881   22891 out.go:177] * [no-preload-829000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:19.444945   22891 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:19.444988   22891 notify.go:220] Checking for updates...
	I0520 05:07:19.451796   22891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:19.454817   22891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:19.457852   22891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:19.460841   22891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:19.463796   22891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:19.467149   22891 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:19.467219   22891 config.go:182] Loaded profile config "stopped-upgrade-298000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 05:07:19.467261   22891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:19.471711   22891 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:07:19.478823   22891 start.go:297] selected driver: qemu2
	I0520 05:07:19.478829   22891 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:07:19.478834   22891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:19.481124   22891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:07:19.483764   22891 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:07:19.486941   22891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:19.486962   22891 cni.go:84] Creating CNI manager for ""
	I0520 05:07:19.486968   22891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:19.486972   22891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:07:19.487002   22891 start.go:340] cluster config:
	{Name:no-preload-829000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:19.491469   22891 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.498809   22891 out.go:177] * Starting "no-preload-829000" primary control-plane node in "no-preload-829000" cluster
	I0520 05:07:19.502777   22891 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:19.502844   22891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/no-preload-829000/config.json ...
	I0520 05:07:19.502861   22891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/no-preload-829000/config.json: {Name:mk050ab22658bd5e21156679713318cc1c56e66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:07:19.502864   22891 cache.go:107] acquiring lock: {Name:mk95541300b9ab09f76a4eea8dd4c3806294ac6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.502864   22891 cache.go:107] acquiring lock: {Name:mk3c521f92bf831b8ac3c11deeba84679ef9dccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.502871   22891 cache.go:107] acquiring lock: {Name:mkde40ebee6ad466c586b2933fa899d685b4e600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.502890   22891 cache.go:107] acquiring lock: {Name:mka89e1414e21febe4d538018da5a187fed7989b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.502913   22891 cache.go:107] acquiring lock: {Name:mka8c004d4a9b3f95cb05604aeff0479e5cfe701 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.502945   22891 cache.go:107] acquiring lock: {Name:mked80a20c6a2a7b3c6d74adc31c804fd0ab0343 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.503021   22891 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:07:19.503002   22891 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:07:19.503063   22891 cache.go:107] acquiring lock: {Name:mkf183d1dc82619ed7d576b20f1c40ae3b252b3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.503170   22891 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:07:19.503161   22891 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 05:07:19.503213   22891 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:07:19.503213   22891 cache.go:107] acquiring lock: {Name:mk50d2c3cd74ac3f9f6646e09855626c0b7255cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:19.503206   22891 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 336.916µs
	I0520 05:07:19.503231   22891 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 05:07:19.503242   22891 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 05:07:19.503263   22891 start.go:360] acquireMachinesLock for no-preload-829000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:19.503311   22891 start.go:364] duration metric: took 42.583µs to acquireMachinesLock for "no-preload-829000"
	I0520 05:07:19.503332   22891 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 05:07:19.503348   22891 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:07:19.503327   22891 start.go:93] Provisioning new machine with config: &{Name:no-preload-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:19.503370   22891 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:19.511803   22891 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:19.517787   22891 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 05:07:19.518315   22891 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 05:07:19.518421   22891 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 05:07:19.518449   22891 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 05:07:19.518470   22891 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 05:07:19.520175   22891 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 05:07:19.520331   22891 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 05:07:19.527800   22891 start.go:159] libmachine.API.Create for "no-preload-829000" (driver="qemu2")
	I0520 05:07:19.527822   22891 client.go:168] LocalClient.Create starting
	I0520 05:07:19.527881   22891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:19.527911   22891 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:19.527921   22891 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:19.527958   22891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:19.527980   22891 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:19.527987   22891 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:19.528366   22891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:19.672433   22891 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:19.762604   22891 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:19.762640   22891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:19.762927   22891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:19.776468   22891 main.go:141] libmachine: STDOUT: 
	I0520 05:07:19.776484   22891 main.go:141] libmachine: STDERR: 
	I0520 05:07:19.776542   22891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2 +20000M
	I0520 05:07:19.788524   22891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:19.788549   22891 main.go:141] libmachine: STDERR: 
	I0520 05:07:19.788563   22891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:19.788568   22891 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:19.788603   22891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:07:e1:8d:6f:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:19.790503   22891 main.go:141] libmachine: STDOUT: 
	I0520 05:07:19.790521   22891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:19.790540   22891 client.go:171] duration metric: took 262.71525ms to LocalClient.Create
	I0520 05:07:19.887633   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 05:07:19.900436   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 05:07:19.917598   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 05:07:19.934458   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 05:07:19.950202   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 05:07:19.982239   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0520 05:07:20.009301   22891 cache.go:162] opening:  /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0520 05:07:20.144061   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 05:07:20.144090   22891 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 641.126833ms
	I0520 05:07:20.144106   22891 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 05:07:21.790672   22891 start.go:128] duration metric: took 2.287299709s to createHost
	I0520 05:07:21.790700   22891 start.go:83] releasing machines lock for "no-preload-829000", held for 2.287397333s
	W0520 05:07:21.790738   22891 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:21.800836   22891 out.go:177] * Deleting "no-preload-829000" in qemu2 ...
	W0520 05:07:21.817064   22891 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:21.817080   22891 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:22.210379   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 05:07:22.210438   22891 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 2.707565125s
	I0520 05:07:22.210455   22891 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 05:07:22.974553   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 05:07:22.974570   22891 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.471652542s
	I0520 05:07:22.974581   22891 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 05:07:23.335720   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 05:07:23.335736   22891 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 3.832849375s
	I0520 05:07:23.335744   22891 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 05:07:24.936702   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 05:07:24.936726   22891 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 5.43389925s
	I0520 05:07:24.936742   22891 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 05:07:25.099823   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 05:07:25.099874   22891 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 5.597044s
	I0520 05:07:25.099900   22891 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 05:07:26.817169   22891 start.go:360] acquireMachinesLock for no-preload-829000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:26.817583   22891 start.go:364] duration metric: took 348.791µs to acquireMachinesLock for "no-preload-829000"
	I0520 05:07:26.817679   22891 start.go:93] Provisioning new machine with config: &{Name:no-preload-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:26.817880   22891 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:26.826458   22891 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:26.867585   22891 start.go:159] libmachine.API.Create for "no-preload-829000" (driver="qemu2")
	I0520 05:07:26.867647   22891 client.go:168] LocalClient.Create starting
	I0520 05:07:26.867842   22891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:26.867926   22891 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:26.867947   22891 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:26.868038   22891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:26.868078   22891 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:26.868098   22891 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:26.868596   22891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:27.015740   22891 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:27.123548   22891 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:27.123556   22891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:27.123739   22891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:27.136573   22891 main.go:141] libmachine: STDOUT: 
	I0520 05:07:27.136598   22891 main.go:141] libmachine: STDERR: 
	I0520 05:07:27.136660   22891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2 +20000M
	I0520 05:07:27.148395   22891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:27.148417   22891 main.go:141] libmachine: STDERR: 
	I0520 05:07:27.148432   22891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:27.148444   22891 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:27.148503   22891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2f:7b:d5:12:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:27.150499   22891 main.go:141] libmachine: STDOUT: 
	I0520 05:07:27.150517   22891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:27.150531   22891 client.go:171] duration metric: took 282.876833ms to LocalClient.Create
	I0520 05:07:28.117086   22891 cache.go:157] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 05:07:28.117141   22891 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.613988042s
	I0520 05:07:28.117162   22891 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 05:07:28.117191   22891 cache.go:87] Successfully saved all images to host disk.
	I0520 05:07:29.151096   22891 start.go:128] duration metric: took 2.333179166s to createHost
	I0520 05:07:29.151176   22891 start.go:83] releasing machines lock for "no-preload-829000", held for 2.333592458s
	W0520 05:07:29.151515   22891 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:29.161090   22891 out.go:177] 
	W0520 05:07:29.167205   22891 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:29.167232   22891 out.go:239] * 
	* 
	W0520 05:07:29.169932   22891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:29.179137   22891 out.go:177] 

** /stderr **
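Up to the network step, disk provisioning succeeds: the log shows libmachine building the machine image by converting a raw seed to qcow2 and then growing it. A minimal sketch of the same two steps, assuming qemu-img is on PATH and using a placeholder disk path instead of the Jenkins paths above:

	# convert the raw seed image to qcow2 (as libmachine does above)
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# grow the image by 20000 MB; qcow2 allocates lazily, so this is cheap
	qemu-img resize disk.qcow2 +20000M
	# verify the new virtual size
	qemu-img info disk.qcow2
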
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (63.263625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.87s)
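
Every start in this group dies at the same point: the socket_vmnet client cannot reach the daemon's socket, so QEMU never receives its network file descriptor. A minimal host-side diagnostic sketch, assuming the /opt/socket_vmnet install paths shown in the log; the manual invocation and gateway address are illustrative and depend on how socket_vmnet was installed:

	# does the socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if the daemon is not running, start it by hand (needs root to open vmnet)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &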

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-829000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-829000 create -f testdata/busybox.yaml: exit status 1 (31.266541ms)

** stderr ** 
	error: context "no-preload-829000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-829000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (29.191791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (28.865292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
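
Because the first start never produced a cluster, no kubeconfig context was written, so every kubectl --context call fails before reaching an API server. A quick way to confirm which contexts actually exist (generic kubectl/minikube invocations, not specific to this suite):

	# list contexts known to the kubeconfig in use
	kubectl config get-contexts
	# if a profile is running but its context drifted, minikube can rewrite it
	out/minikube-darwin-arm64 update-context -p no-preload-829000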

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-829000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-829000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-829000 describe deploy/metrics-server -n kube-system: exit status 1 (27.345708ms)

** stderr ** 
	error: context "no-preload-829000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-829000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (29.149792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
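
The addons enable command itself is not reported as failing here; it is the follow-up kubectl describe that errors out on the missing context. On a healthy cluster, the --images/--registries overrides combine into the image reference the test greps for. A sketch of the expected check, assuming a running cluster:

	out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-829000 \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	    --registries=MetricsServer=fake.domain
	# the deployment spec should then reference:
	#   fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-829000 describe deploy/metrics-server -n kube-system | grep Image: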

TestStartStop/group/no-preload/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.169951875s)

-- stdout --
	* [no-preload-829000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-829000" primary control-plane node in "no-preload-829000" cluster
	* Restarting existing qemu2 VM for "no-preload-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:33.020868   22976 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:33.021082   22976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:33.021086   22976 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:33.021088   22976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:33.021230   22976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:33.022298   22976 out.go:298] Setting JSON to false
	I0520 05:07:33.039000   22976 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11224,"bootTime":1716195629,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:33.039066   22976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:33.043685   22976 out.go:177] * [no-preload-829000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:33.049622   22976 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:33.049694   22976 notify.go:220] Checking for updates...
	I0520 05:07:33.055635   22976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:33.058676   22976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:33.061574   22976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:33.064638   22976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:33.065945   22976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:33.068989   22976 config.go:182] Loaded profile config "no-preload-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:33.069237   22976 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:33.073621   22976 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 05:07:33.078614   22976 start.go:297] selected driver: qemu2
	I0520 05:07:33.078620   22976 start.go:901] validating driver "qemu2" against &{Name:no-preload-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:33.078662   22976 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:33.080848   22976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:33.080872   22976 cni.go:84] Creating CNI manager for ""
	I0520 05:07:33.080878   22976 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:33.080905   22976 start.go:340] cluster config:
	{Name:no-preload-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:33.085042   22976 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.092553   22976 out.go:177] * Starting "no-preload-829000" primary control-plane node in "no-preload-829000" cluster
	I0520 05:07:33.096622   22976 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:33.096701   22976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/no-preload-829000/config.json ...
	I0520 05:07:33.096724   22976 cache.go:107] acquiring lock: {Name:mk95541300b9ab09f76a4eea8dd4c3806294ac6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096767   22976 cache.go:107] acquiring lock: {Name:mkde40ebee6ad466c586b2933fa899d685b4e600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096766   22976 cache.go:107] acquiring lock: {Name:mk3c521f92bf831b8ac3c11deeba84679ef9dccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096791   22976 cache.go:107] acquiring lock: {Name:mkf183d1dc82619ed7d576b20f1c40ae3b252b3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096793   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 05:07:33.096802   22976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.958µs
	I0520 05:07:33.096808   22976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 05:07:33.096824   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 05:07:33.096829   22976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 100.667µs
	I0520 05:07:33.096834   22976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 05:07:33.096829   22976 cache.go:107] acquiring lock: {Name:mked80a20c6a2a7b3c6d74adc31c804fd0ab0343 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096843   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 05:07:33.096821   22976 cache.go:107] acquiring lock: {Name:mk50d2c3cd74ac3f9f6646e09855626c0b7255cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096866   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 05:07:33.096859   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 05:07:33.096870   22976 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 41.334µs
	I0520 05:07:33.096873   22976 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 05:07:33.096872   22976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 148.416µs
	I0520 05:07:33.096861   22976 cache.go:107] acquiring lock: {Name:mka8c004d4a9b3f95cb05604aeff0479e5cfe701 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096848   22976 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 57.333µs
	I0520 05:07:33.096876   22976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 05:07:33.096909   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 05:07:33.096895   22976 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 05:07:33.096913   22976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 53.125µs
	I0520 05:07:33.096917   22976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 05:07:33.096930   22976 cache.go:107] acquiring lock: {Name:mka89e1414e21febe4d538018da5a187fed7989b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:33.096970   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 05:07:33.096983   22976 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 161.791µs
	I0520 05:07:33.096985   22976 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 05:07:33.096974   22976 cache.go:115] /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 05:07:33.096989   22976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 116.333µs
	I0520 05:07:33.096993   22976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 05:07:33.096996   22976 cache.go:87] Successfully saved all images to host disk.
	I0520 05:07:33.097100   22976 start.go:360] acquireMachinesLock for no-preload-829000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:33.097132   22976 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "no-preload-829000"
	I0520 05:07:33.097142   22976 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:33.097147   22976 fix.go:54] fixHost starting: 
	I0520 05:07:33.097252   22976 fix.go:112] recreateIfNeeded on no-preload-829000: state=Stopped err=<nil>
	W0520 05:07:33.097260   22976 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:33.105609   22976 out.go:177] * Restarting existing qemu2 VM for "no-preload-829000" ...
	I0520 05:07:33.109639   22976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2f:7b:d5:12:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:33.111643   22976 main.go:141] libmachine: STDOUT: 
	I0520 05:07:33.111664   22976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:33.111692   22976 fix.go:56] duration metric: took 14.544459ms for fixHost
	I0520 05:07:33.111695   22976 start.go:83] releasing machines lock for "no-preload-829000", held for 14.558708ms
	W0520 05:07:33.111700   22976 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:33.111725   22976 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:33.111729   22976 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:38.112088   22976 start.go:360] acquireMachinesLock for no-preload-829000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:38.112550   22976 start.go:364] duration metric: took 381.833µs to acquireMachinesLock for "no-preload-829000"
	I0520 05:07:38.112692   22976 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:38.112708   22976 fix.go:54] fixHost starting: 
	I0520 05:07:38.113240   22976 fix.go:112] recreateIfNeeded on no-preload-829000: state=Stopped err=<nil>
	W0520 05:07:38.113260   22976 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:38.117791   22976 out.go:177] * Restarting existing qemu2 VM for "no-preload-829000" ...
	I0520 05:07:38.125002   22976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2f:7b:d5:12:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/no-preload-829000/disk.qcow2
	I0520 05:07:38.132989   22976 main.go:141] libmachine: STDOUT: 
	I0520 05:07:38.133044   22976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:38.133116   22976 fix.go:56] duration metric: took 20.410125ms for fixHost
	I0520 05:07:38.133129   22976 start.go:83] releasing machines lock for "no-preload-829000", held for 20.562792ms
	W0520 05:07:38.133280   22976 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:38.141786   22976 out.go:177] 
	W0520 05:07:38.144798   22976 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:38.144814   22976 out.go:239] * 
	* 
	W0520 05:07:38.146552   22976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:38.154737   22976 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (57.022167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
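
SecondStart reuses the existing (stopped) machine, so it skips creation and goes straight to the restart path, failing on the same socket before printing the delete hint. Following the log's own advice means discarding the profile and starting fresh once the daemon is reachable:

	# remove the broken profile (machine dir, config, context), then retry
	out/minikube-darwin-arm64 delete -p no-preload-829000
	out/minikube-darwin-arm64 start -p no-preload-829000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.30.1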

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-829000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (30.7285ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-829000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-829000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-829000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.667375ms)

** stderr ** 
	error: context "no-preload-829000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-829000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (28.915208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-829000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (28.773834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
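
The want/got diff above is go-cmp notation: every expected image is prefixed with -, meaning image list returned nothing at all, which follows from the VM never booting; with --preload=false these images exist only in the host-side cache. A sketch for checking both sides, assuming the cache layout shown earlier in the log:

	# host-side cache written by the earlier 'save to tar file' steps
	ls /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/images/arm64/registry.k8s.io/
	# what a running node would report; table and json formats both exist
	out/minikube-darwin-arm64 -p no-preload-829000 image list --format=table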

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-829000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-829000 --alsologtostderr -v=1: exit status 83 (39.108083ms)

-- stdout --
	* The control-plane node no-preload-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-829000"

-- /stdout --
** stderr ** 
	I0520 05:07:38.404085   22997 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:38.404266   22997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:38.404272   22997 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:38.404274   22997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:38.404411   22997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:38.404654   22997 out.go:298] Setting JSON to false
	I0520 05:07:38.404660   22997 mustload.go:65] Loading cluster: no-preload-829000
	I0520 05:07:38.404870   22997 config.go:182] Loaded profile config "no-preload-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:38.407668   22997 out.go:177] * The control-plane node no-preload-829000 host is not running: state=Stopped
	I0520 05:07:38.411741   22997 out.go:177]   To start a cluster, run: "minikube start -p no-preload-829000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-829000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (28.267709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (28.134875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
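
Each post-mortem probes the host with a Go template over minikube's status output; here, exit status 7 pairs with the Stopped host state, which is why the helper marks it "may be ok" rather than a command error. The same template mechanism can pull several fields at once (.Host appears verbatim above; the other field names are standard minikube status fields and are listed here as an assumption):

	out/minikube-darwin-arm64 status -p no-preload-829000 \
	    --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'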

TestStartStop/group/embed-certs/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-068000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-068000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.92708175s)

-- stdout --
	* [embed-certs-068000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-068000" primary control-plane node in "embed-certs-068000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-068000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:38.890937   23024 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:38.891079   23024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:38.891082   23024 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:38.891084   23024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:38.891206   23024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:38.892243   23024 out.go:298] Setting JSON to false
	I0520 05:07:38.909741   23024 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11229,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:38.909828   23024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:38.912778   23024 out.go:177] * [embed-certs-068000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:38.917034   23024 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:38.920787   23024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:38.917126   23024 notify.go:220] Checking for updates...
	I0520 05:07:38.926776   23024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:38.934820   23024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:38.941822   23024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:38.948869   23024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:38.957170   23024 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:38.957218   23024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:38.964818   23024 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:07:38.971843   23024 start.go:297] selected driver: qemu2
	I0520 05:07:38.971854   23024 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:07:38.971862   23024 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:38.974196   23024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:07:38.976859   23024 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:07:38.979911   23024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:38.979927   23024 cni.go:84] Creating CNI manager for ""
	I0520 05:07:38.979933   23024 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:38.979939   23024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:07:38.979962   23024 start.go:340] cluster config:
	{Name:embed-certs-068000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:38.984560   23024 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:38.991875   23024 out.go:177] * Starting "embed-certs-068000" primary control-plane node in "embed-certs-068000" cluster
	I0520 05:07:38.995837   23024 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:38.995866   23024 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:07:38.995888   23024 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:38.995960   23024 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:38.995972   23024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:07:38.996038   23024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/embed-certs-068000/config.json ...
	I0520 05:07:38.996051   23024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/embed-certs-068000/config.json: {Name:mk01ebe2f84ff4506240d9dd0bf34178dea23408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:07:38.996270   23024 start.go:360] acquireMachinesLock for embed-certs-068000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:38.996302   23024 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "embed-certs-068000"
	I0520 05:07:38.996313   23024 start.go:93] Provisioning new machine with config: &{Name:embed-certs-068000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:38.996338   23024 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:38.999837   23024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:39.015556   23024 start.go:159] libmachine.API.Create for "embed-certs-068000" (driver="qemu2")
	I0520 05:07:39.015589   23024 client.go:168] LocalClient.Create starting
	I0520 05:07:39.015666   23024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:39.015700   23024 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:39.015713   23024 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:39.015758   23024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:39.015783   23024 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:39.015791   23024 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:39.016175   23024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:39.205481   23024 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:39.363189   23024 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:39.363197   23024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:39.363367   23024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:39.375524   23024 main.go:141] libmachine: STDOUT: 
	I0520 05:07:39.375550   23024 main.go:141] libmachine: STDERR: 
	I0520 05:07:39.375603   23024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2 +20000M
	I0520 05:07:39.386330   23024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:39.386345   23024 main.go:141] libmachine: STDERR: 
	I0520 05:07:39.386357   23024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:39.386361   23024 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:39.386397   23024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:53:a6:19:2c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:39.388063   23024 main.go:141] libmachine: STDOUT: 
	I0520 05:07:39.388081   23024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:39.388099   23024 client.go:171] duration metric: took 372.505709ms to LocalClient.Create
	I0520 05:07:41.390303   23024 start.go:128] duration metric: took 2.393956917s to createHost
	I0520 05:07:41.390395   23024 start.go:83] releasing machines lock for "embed-certs-068000", held for 2.394068084s
	W0520 05:07:41.390537   23024 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:41.406651   23024 out.go:177] * Deleting "embed-certs-068000" in qemu2 ...
	W0520 05:07:41.427663   23024 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:41.427684   23024 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:46.429886   23024 start.go:360] acquireMachinesLock for embed-certs-068000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:46.430320   23024 start.go:364] duration metric: took 334.833µs to acquireMachinesLock for "embed-certs-068000"
	I0520 05:07:46.430461   23024 start.go:93] Provisioning new machine with config: &{Name:embed-certs-068000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:46.430753   23024 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:46.443224   23024 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:46.494857   23024 start.go:159] libmachine.API.Create for "embed-certs-068000" (driver="qemu2")
	I0520 05:07:46.494909   23024 client.go:168] LocalClient.Create starting
	I0520 05:07:46.495048   23024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:46.495108   23024 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:46.495130   23024 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:46.495206   23024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:46.495249   23024 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:46.495264   23024 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:46.495876   23024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:46.641995   23024 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:46.715090   23024 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:46.715096   23024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:46.715290   23024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:46.727863   23024 main.go:141] libmachine: STDOUT: 
	I0520 05:07:46.727884   23024 main.go:141] libmachine: STDERR: 
	I0520 05:07:46.727940   23024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2 +20000M
	I0520 05:07:46.738758   23024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:46.738775   23024 main.go:141] libmachine: STDERR: 
	I0520 05:07:46.738786   23024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:46.738792   23024 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:46.738835   23024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:56:87:c3:2c:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:46.740540   23024 main.go:141] libmachine: STDOUT: 
	I0520 05:07:46.740556   23024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:46.740568   23024 client.go:171] duration metric: took 245.657208ms to LocalClient.Create
	I0520 05:07:48.742728   23024 start.go:128] duration metric: took 2.311963375s to createHost
	I0520 05:07:48.742787   23024 start.go:83] releasing machines lock for "embed-certs-068000", held for 2.312461417s
	W0520 05:07:48.743203   23024 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:48.756826   23024 out.go:177] 
	W0520 05:07:48.761014   23024 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:48.761059   23024 out.go:239] * 
	* 
	W0520 05:07:48.763513   23024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:48.773891   23024 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-068000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (64.466708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.99s)
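
All of the TestStartStop failures that follow share the single host-side fault visible in the stderr above: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so no qemu2 VM ever gets networking. A minimal triage sketch for the build agent, assuming socket_vmnet lives under /opt/socket_vmnet as the SocketVMnetClientPath/SocketVMnetPath values in the log indicate:

	# Is the unix socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Exercise the client outside minikube; on success it connects and then
	# execs the wrapped command (`true` stands in for qemu-system-aarch64 here).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the socket is missing and no process matches, the daemon is simply not running, which would explain every "Connection refused" in this report.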

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-128000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-128000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (12.10548075s)

-- stdout --
	* [default-k8s-diff-port-128000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-128000" primary control-plane node in "default-k8s-diff-port-128000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-128000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:39.217460   23043 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:39.217591   23043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:39.217595   23043 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:39.217597   23043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:39.217743   23043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:39.218821   23043 out.go:298] Setting JSON to false
	I0520 05:07:39.235515   23043 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11230,"bootTime":1716195629,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:39.235637   23043 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:39.240829   23043 out.go:177] * [default-k8s-diff-port-128000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:39.250741   23043 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:39.261447   23043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:39.250826   23043 notify.go:220] Checking for updates...
	I0520 05:07:39.266326   23043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:39.268769   23043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:39.271815   23043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:39.274879   23043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:39.278150   23043 config.go:182] Loaded profile config "embed-certs-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:39.278206   23043 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:39.278251   23043 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:39.282828   23043 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:07:39.289855   23043 start.go:297] selected driver: qemu2
	I0520 05:07:39.289861   23043 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:07:39.289866   23043 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:39.292055   23043 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 05:07:39.294843   23043 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:07:39.297922   23043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:39.297937   23043 cni.go:84] Creating CNI manager for ""
	I0520 05:07:39.297942   23043 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:39.297945   23043 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:07:39.297974   23043 start.go:340] cluster config:
	{Name:default-k8s-diff-port-128000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:39.302092   23043 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:39.309845   23043 out.go:177] * Starting "default-k8s-diff-port-128000" primary control-plane node in "default-k8s-diff-port-128000" cluster
	I0520 05:07:39.312830   23043 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:39.312842   23043 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:07:39.312850   23043 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:39.312896   23043 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:39.312900   23043 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:07:39.312945   23043 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/default-k8s-diff-port-128000/config.json ...
	I0520 05:07:39.312955   23043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/default-k8s-diff-port-128000/config.json: {Name:mk72b4940a92315066ac90a499b716860dbee13e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:07:39.313212   23043 start.go:360] acquireMachinesLock for default-k8s-diff-port-128000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:41.390638   23043 start.go:364] duration metric: took 2.077400542s to acquireMachinesLock for "default-k8s-diff-port-128000"
	I0520 05:07:41.390788   23043 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:41.391030   23043 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:41.400632   23043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:41.451606   23043 start.go:159] libmachine.API.Create for "default-k8s-diff-port-128000" (driver="qemu2")
	I0520 05:07:41.451656   23043 client.go:168] LocalClient.Create starting
	I0520 05:07:41.451801   23043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:41.451858   23043 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:41.451884   23043 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:41.451957   23043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:41.452000   23043 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:41.452014   23043 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:41.452748   23043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:41.603602   23043 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:41.778069   23043 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:41.778079   23043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:41.778308   23043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:41.791774   23043 main.go:141] libmachine: STDOUT: 
	I0520 05:07:41.791796   23043 main.go:141] libmachine: STDERR: 
	I0520 05:07:41.791867   23043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2 +20000M
	I0520 05:07:41.803143   23043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:41.803159   23043 main.go:141] libmachine: STDERR: 
	I0520 05:07:41.803178   23043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:41.803182   23043 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:41.803211   23043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ec:0d:89:da:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:41.804948   23043 main.go:141] libmachine: STDOUT: 
	I0520 05:07:41.804964   23043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:41.804982   23043 client.go:171] duration metric: took 353.320959ms to LocalClient.Create
	I0520 05:07:43.807103   23043 start.go:128] duration metric: took 2.416065792s to createHost
	I0520 05:07:43.807141   23043 start.go:83] releasing machines lock for "default-k8s-diff-port-128000", held for 2.416479959s
	W0520 05:07:43.807183   23043 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:43.819584   23043 out.go:177] * Deleting "default-k8s-diff-port-128000" in qemu2 ...
	W0520 05:07:43.843805   23043 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:43.843842   23043 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:48.845902   23043 start.go:360] acquireMachinesLock for default-k8s-diff-port-128000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:48.846011   23043 start.go:364] duration metric: took 78.875µs to acquireMachinesLock for "default-k8s-diff-port-128000"
	I0520 05:07:48.846039   23043 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:48.846088   23043 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:48.850352   23043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:48.869656   23043 start.go:159] libmachine.API.Create for "default-k8s-diff-port-128000" (driver="qemu2")
	I0520 05:07:48.869690   23043 client.go:168] LocalClient.Create starting
	I0520 05:07:48.869761   23043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:48.869796   23043 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:48.869811   23043 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:48.869856   23043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:48.869874   23043 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:48.869881   23043 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:48.870186   23043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:49.048892   23043 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:49.220571   23043 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:49.224732   23043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:49.225009   23043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:49.237615   23043 main.go:141] libmachine: STDOUT: 
	I0520 05:07:49.237642   23043 main.go:141] libmachine: STDERR: 
	I0520 05:07:49.237707   23043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2 +20000M
	I0520 05:07:49.248650   23043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:49.248668   23043 main.go:141] libmachine: STDERR: 
	I0520 05:07:49.248686   23043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:49.248693   23043 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:49.248739   23043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:25:04:a7:85:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:49.250485   23043 main.go:141] libmachine: STDOUT: 
	I0520 05:07:49.250500   23043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:49.250514   23043 client.go:171] duration metric: took 380.822834ms to LocalClient.Create
	I0520 05:07:51.252690   23043 start.go:128] duration metric: took 2.406596334s to createHost
	I0520 05:07:51.252758   23043 start.go:83] releasing machines lock for "default-k8s-diff-port-128000", held for 2.406751167s
	W0520 05:07:51.253225   23043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-128000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:51.261829   23043 out.go:177] 
	W0520 05:07:51.265900   23043 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:51.265922   23043 out.go:239] * 
	* 
	W0520 05:07:51.268704   23043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:51.277879   23043 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-128000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (67.186084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.18s)
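
The "Will try again in 5 seconds" retry above cannot succeed, because the missing piece is the host daemon, not the profile. A hedged recovery sketch; the service name and flags follow the socket_vmnet README rather than anything in this log, and the gateway address is only an example:

	# If socket_vmnet was installed via Homebrew, it runs as a root service:
	sudo brew services restart socket_vmnet
	# If it was built from source into /opt/socket_vmnet, start it by hand:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet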

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-068000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-068000 create -f testdata/busybox.yaml: exit status 1 (31.362541ms)

** stderr ** 
	error: context "embed-certs-068000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-068000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (32.273708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (32.493708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
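
This failure is secondary: kubectl reports the context missing only because FirstStart never created the "embed-certs-068000" cluster. A guard along these lines (hypothetical, not part of the test suite) would make that dependency explicit before any context-bound step runs:

	kubectl config get-contexts -o name | grep -qx embed-certs-068000 \
		|| { echo "context embed-certs-068000 missing: cluster never started" >&2; exit 1; }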

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-068000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-068000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-068000 describe deploy/metrics-server -n kube-system: exit status 1 (29.637167ms)

** stderr ** 
	error: context "embed-certs-068000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-068000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (29.383292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)
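
The assertion at start_stop_delete_test.go:221 builds the expected image by prefixing the --registries override to the --images override passed above, giving "fake.domain/registry.k8s.io/echoserver:1.4". On a healthy cluster the check reduces to roughly this sketch (only meaningful once the context exists):

	kubectl --context embed-certs-068000 -n kube-system describe deploy/metrics-server \
		| grep 'fake.domain/registry.k8s.io/echoserver:1.4'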

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-128000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-128000 create -f testdata/busybox.yaml: exit status 1 (30.226458ms)

** stderr ** 
	error: context "default-k8s-diff-port-128000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-128000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (27.54975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (28.082125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-128000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-128000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-128000 describe deploy/metrics-server -n kube-system: exit status 1 (27.093375ms)

** stderr ** 
	error: context "default-k8s-diff-port-128000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-128000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (27.647417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
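
Two exit codes recur throughout this report: in these runs "minikube start" exits 80 when it aborts with the GUEST_PROVISION error, while "minikube status" exits 7 for a profile whose host is Stopped (hence the "may be ok" note from helpers_test.go). Reproducing the latter by hand against the profile above:

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-128000
	echo "status exit: $?"   # 7 while the host is Stopped, per the post-mortem output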

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-068000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-068000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.179190584s)

-- stdout --
	* [embed-certs-068000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-068000" primary control-plane node in "embed-certs-068000" cluster
	* Restarting existing qemu2 VM for "embed-certs-068000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-068000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:52.154220   23118 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:52.154331   23118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:52.154335   23118 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:52.154338   23118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:52.154467   23118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:52.155465   23118 out.go:298] Setting JSON to false
	I0520 05:07:52.171418   23118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11243,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:52.171488   23118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:52.176677   23118 out.go:177] * [embed-certs-068000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:52.183670   23118 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:52.183713   23118 notify.go:220] Checking for updates...
	I0520 05:07:52.187657   23118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:52.190674   23118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:52.193741   23118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:52.196662   23118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:52.199686   23118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:52.202885   23118 config.go:182] Loaded profile config "embed-certs-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:52.203141   23118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:52.207663   23118 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 05:07:52.214668   23118 start.go:297] selected driver: qemu2
	I0520 05:07:52.214677   23118 start.go:901] validating driver "qemu2" against &{Name:embed-certs-068000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:52.214750   23118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:52.217074   23118 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:52.217099   23118 cni.go:84] Creating CNI manager for ""
	I0520 05:07:52.217106   23118 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:52.217131   23118 start.go:340] cluster config:
	{Name:embed-certs-068000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:52.221466   23118 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:52.228637   23118 out.go:177] * Starting "embed-certs-068000" primary control-plane node in "embed-certs-068000" cluster
	I0520 05:07:52.232651   23118 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:52.232666   23118 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:07:52.232678   23118 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:52.232730   23118 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:52.232735   23118 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:07:52.232792   23118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/embed-certs-068000/config.json ...
	I0520 05:07:52.233217   23118 start.go:360] acquireMachinesLock for embed-certs-068000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:52.233244   23118 start.go:364] duration metric: took 21µs to acquireMachinesLock for "embed-certs-068000"
	I0520 05:07:52.233253   23118 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:52.233259   23118 fix.go:54] fixHost starting: 
	I0520 05:07:52.233378   23118 fix.go:112] recreateIfNeeded on embed-certs-068000: state=Stopped err=<nil>
	W0520 05:07:52.233386   23118 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:52.238676   23118 out.go:177] * Restarting existing qemu2 VM for "embed-certs-068000" ...
	I0520 05:07:52.242705   23118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:56:87:c3:2c:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:52.244772   23118 main.go:141] libmachine: STDOUT: 
	I0520 05:07:52.244798   23118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:52.244826   23118 fix.go:56] duration metric: took 11.567458ms for fixHost
	I0520 05:07:52.244830   23118 start.go:83] releasing machines lock for "embed-certs-068000", held for 11.582125ms
	W0520 05:07:52.244836   23118 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:52.244868   23118 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:52.244873   23118 start.go:728] Will try again in 5 seconds ...
	I0520 05:07:57.246982   23118 start.go:360] acquireMachinesLock for embed-certs-068000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:57.247344   23118 start.go:364] duration metric: took 277.958µs to acquireMachinesLock for "embed-certs-068000"
	I0520 05:07:57.247501   23118 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:57.247520   23118 fix.go:54] fixHost starting: 
	I0520 05:07:57.248208   23118 fix.go:112] recreateIfNeeded on embed-certs-068000: state=Stopped err=<nil>
	W0520 05:07:57.248243   23118 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:57.252838   23118 out.go:177] * Restarting existing qemu2 VM for "embed-certs-068000" ...
	I0520 05:07:57.260808   23118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:56:87:c3:2c:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/embed-certs-068000/disk.qcow2
	I0520 05:07:57.270093   23118 main.go:141] libmachine: STDOUT: 
	I0520 05:07:57.270162   23118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:57.270243   23118 fix.go:56] duration metric: took 22.726292ms for fixHost
	I0520 05:07:57.270259   23118 start.go:83] releasing machines lock for "embed-certs-068000", held for 22.89375ms
	W0520 05:07:57.270434   23118 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-068000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-068000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:57.277632   23118 out.go:177] 
	W0520 05:07:57.281712   23118 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:57.281754   23118 out.go:239] * 
	* 
	W0520 05:07:57.284119   23118 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:07:57.292504   23118 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-068000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (65.653708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
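
Note: every start failure in this group collapses to the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused — socket_vmnet_client cannot reach the socket_vmnet daemon that the qemu2 driver relies on for networking on macOS (the socket path appears as SocketVMnetPath:/var/run/socket_vmnet in the cluster configs above). The Go sketch below is a standalone diagnostic, not part of minikube: it dials the same unix socket to check whether the daemon is accepting connections.

// socketcheck.go — probe the socket_vmnet control socket the qemu2
// driver needs. A "connection refused" here reproduces the failure
// mode seen throughout this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}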

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-128000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-128000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.566878083s)

-- stdout --
	* [default-k8s-diff-port-128000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-128000" primary control-plane node in "default-k8s-diff-port-128000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-128000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-128000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:54.973244   23141 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:54.973375   23141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:54.973379   23141 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:54.973381   23141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:54.973568   23141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:54.974542   23141 out.go:298] Setting JSON to false
	I0520 05:07:54.990481   23141 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11245,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:54.990542   23141 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:54.995592   23141 out.go:177] * [default-k8s-diff-port-128000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:55.002441   23141 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:55.006586   23141 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:55.002507   23141 notify.go:220] Checking for updates...
	I0520 05:07:55.010624   23141 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:55.013579   23141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:55.016580   23141 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:55.019635   23141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:55.021180   23141 config.go:182] Loaded profile config "default-k8s-diff-port-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:55.021434   23141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:55.025617   23141 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 05:07:55.032441   23141 start.go:297] selected driver: qemu2
	I0520 05:07:55.032446   23141 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:55.032489   23141 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:55.034742   23141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 05:07:55.034768   23141 cni.go:84] Creating CNI manager for ""
	I0520 05:07:55.034775   23141 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:55.034797   23141 start.go:340] cluster config:
	{Name:default-k8s-diff-port-128000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-128000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:55.039027   23141 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:55.046624   23141 out.go:177] * Starting "default-k8s-diff-port-128000" primary control-plane node in "default-k8s-diff-port-128000" cluster
	I0520 05:07:55.050579   23141 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:55.050594   23141 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:07:55.050612   23141 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:55.050664   23141 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:55.050669   23141 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:07:55.050748   23141 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/default-k8s-diff-port-128000/config.json ...
	I0520 05:07:55.051164   23141 start.go:360] acquireMachinesLock for default-k8s-diff-port-128000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:55.051193   23141 start.go:364] duration metric: took 23.166µs to acquireMachinesLock for "default-k8s-diff-port-128000"
	I0520 05:07:55.051202   23141 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:07:55.051208   23141 fix.go:54] fixHost starting: 
	I0520 05:07:55.051322   23141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128000: state=Stopped err=<nil>
	W0520 05:07:55.051330   23141 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:07:55.055657   23141 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-128000" ...
	I0520 05:07:55.063601   23141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:25:04:a7:85:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:07:55.065596   23141 main.go:141] libmachine: STDOUT: 
	I0520 05:07:55.065621   23141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:55.065652   23141 fix.go:56] duration metric: took 14.44375ms for fixHost
	I0520 05:07:55.065656   23141 start.go:83] releasing machines lock for "default-k8s-diff-port-128000", held for 14.458875ms
	W0520 05:07:55.065662   23141 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:07:55.065695   23141 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:07:55.065699   23141 start.go:728] Will try again in 5 seconds ...
	I0520 05:08:00.067928   23141 start.go:360] acquireMachinesLock for default-k8s-diff-port-128000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:08:00.445130   23141 start.go:364] duration metric: took 377.074959ms to acquireMachinesLock for "default-k8s-diff-port-128000"
	I0520 05:08:00.445260   23141 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:08:00.445299   23141 fix.go:54] fixHost starting: 
	I0520 05:08:00.446042   23141 fix.go:112] recreateIfNeeded on default-k8s-diff-port-128000: state=Stopped err=<nil>
	W0520 05:08:00.446070   23141 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:08:00.455353   23141 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-128000" ...
	I0520 05:08:00.468696   23141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:25:04:a7:85:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/default-k8s-diff-port-128000/disk.qcow2
	I0520 05:08:00.478416   23141 main.go:141] libmachine: STDOUT: 
	I0520 05:08:00.478485   23141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:08:00.478568   23141 fix.go:56] duration metric: took 33.275666ms for fixHost
	I0520 05:08:00.478585   23141 start.go:83] releasing machines lock for "default-k8s-diff-port-128000", held for 33.432458ms
	W0520 05:08:00.478775   23141 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-128000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-128000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:08:00.487420   23141 out.go:177] 
	W0520 05:08:00.490475   23141 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:08:00.490494   23141 out.go:239] * 
	* 
	W0520 05:08:00.492405   23141 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:08:00.502455   23141 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-128000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (61.684084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.63s)
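
Note: the log shows the recovery path minikube takes when the driver start fails: fixHost returns, "StartHost failed, but will try again" is emitted, and a single retry runs after a fixed five-second sleep ("Will try again in 5 seconds ...") before the command exits with GUEST_PROVISION. A minimal sketch of that one-retry pattern follows; startHost is a hypothetical stand-in for the failing driver start, not minikube's actual function.

// retry.go — fixed-delay, single-retry pattern as observed in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in for the qemu2 driver start; fails the same way as above.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}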

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-068000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (31.49ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
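
Note: client config: context "embed-certs-068000" does not exist is what kubeconfig loading reports when the profile's context was never written, because the restart above never produced a running cluster. The wording matches client-go's clientcmd package; assuming (not confirmed by this report) that the test resolves its client that way, this sketch reproduces the error.

// ctxcheck.go — force a kubeconfig context that was never created.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-068000"}
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cfg.ClientConfig(); err != nil {
		// Prints: client config: context "embed-certs-068000" does not exist
		fmt.Println("client config:", err)
	}
}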

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-068000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-068000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-068000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.514167ms)

** stderr ** 
	error: context "embed-certs-068000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-068000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (28.179875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-068000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (28.033334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
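
Note: in the "-want +got" diff above, every expected image carries a leading "-" because "image list" returned nothing for a VM that never started, so the entire want set is reported missing. The diff shape matches github.com/google/go-cmp; assuming that is what the test uses, a reduced example produces the same output format.

// diffdemo.go — how an all-minus "-want +got" diff arises from an empty result.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // "image list" yields nothing for a stopped VM
	// Each wanted image prints with a leading "-", as in the failure above.
	fmt.Println(cmp.Diff(want, got))
}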

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-068000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-068000 --alsologtostderr -v=1: exit status 83 (39.371916ms)

-- stdout --
	* The control-plane node embed-certs-068000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-068000"

-- /stdout --
** stderr ** 
	I0520 05:07:57.554945   23160 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:57.555106   23160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:57.555109   23160 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:57.555111   23160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:57.555244   23160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:57.555458   23160 out.go:298] Setting JSON to false
	I0520 05:07:57.555465   23160 mustload.go:65] Loading cluster: embed-certs-068000
	I0520 05:07:57.555650   23160 config.go:182] Loaded profile config "embed-certs-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:57.560279   23160 out.go:177] * The control-plane node embed-certs-068000 host is not running: state=Stopped
	I0520 05:07:57.564312   23160 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-068000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-068000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (27.776333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (28.098292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
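
Note: three exit statuses recur in this report, and the log itself attaches a meaning to each: 80 (GUEST_PROVISION, the driver could not provision the VM), 83 (advisory exit from commands such as pause when the host is not running), and 7 (status on a Stopped host, which the harness marks "may be ok"). The sketch below only collects that mapping as read off the failures above; it is not taken from minikube's source.

// exitcodes.go — the exit statuses observed in this report.
package main

import "fmt"

func main() {
	observed := []struct {
		code    int
		meaning string
	}{
		{7, `"status" on a Stopped host (the harness notes this "may be ok")`},
		{80, "GUEST_PROVISION: the qemu2 driver could not start the VM"},
		{83, `advisory exit from commands like "pause" when the host is not running`},
	}
	for _, e := range observed {
		fmt.Printf("exit status %d: %s\n", e.code, e.meaning)
	}
}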

TestStartStop/group/newest-cni/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-939000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-939000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.955673625s)

-- stdout --
	* [newest-cni-939000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-939000" primary control-plane node in "newest-cni-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:07:58.000606   23183 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:07:58.000749   23183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:58.000752   23183 out.go:304] Setting ErrFile to fd 2...
	I0520 05:07:58.000754   23183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:07:58.000889   23183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:07:58.001974   23183 out.go:298] Setting JSON to false
	I0520 05:07:58.018050   23183 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11249,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:07:58.018167   23183 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:07:58.023017   23183 out.go:177] * [newest-cni-939000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:07:58.029963   23183 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:07:58.030023   23183 notify.go:220] Checking for updates...
	I0520 05:07:58.036907   23183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:07:58.039932   23183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:07:58.043002   23183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:07:58.045888   23183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:07:58.048925   23183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:07:58.052237   23183 config.go:182] Loaded profile config "default-k8s-diff-port-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:58.052297   23183 config.go:182] Loaded profile config "multinode-964000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:07:58.052346   23183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:07:58.056828   23183 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 05:07:58.063886   23183 start.go:297] selected driver: qemu2
	I0520 05:07:58.063893   23183 start.go:901] validating driver "qemu2" against <nil>
	I0520 05:07:58.063899   23183 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:07:58.066252   23183 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0520 05:07:58.066275   23183 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0520 05:07:58.073862   23183 out.go:177] * Automatically selected the socket_vmnet network
	I0520 05:07:58.077014   23183 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 05:07:58.077027   23183 cni.go:84] Creating CNI manager for ""
	I0520 05:07:58.077034   23183 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:07:58.077038   23183 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 05:07:58.077090   23183 start.go:340] cluster config:
	{Name:newest-cni-939000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:07:58.081752   23183 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:07:58.088896   23183 out.go:177] * Starting "newest-cni-939000" primary control-plane node in "newest-cni-939000" cluster
	I0520 05:07:58.091843   23183 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:07:58.091858   23183 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:07:58.091864   23183 cache.go:56] Caching tarball of preloaded images
	I0520 05:07:58.091921   23183 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:07:58.091926   23183 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:07:58.091989   23183 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/newest-cni-939000/config.json ...
	I0520 05:07:58.092000   23183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/newest-cni-939000/config.json: {Name:mk622f798cdab479d9d6d64bd7877678cd8e2364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 05:07:58.092350   23183 start.go:360] acquireMachinesLock for newest-cni-939000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:07:58.092384   23183 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "newest-cni-939000"
	I0520 05:07:58.092396   23183 start.go:93] Provisioning new machine with config: &{Name:newest-cni-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:07:58.092434   23183 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:07:58.096957   23183 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:07:58.113971   23183 start.go:159] libmachine.API.Create for "newest-cni-939000" (driver="qemu2")
	I0520 05:07:58.114000   23183 client.go:168] LocalClient.Create starting
	I0520 05:07:58.114055   23183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:07:58.114084   23183 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:58.114094   23183 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:58.114133   23183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:07:58.114156   23183 main.go:141] libmachine: Decoding PEM data...
	I0520 05:07:58.114164   23183 main.go:141] libmachine: Parsing certificate...
	I0520 05:07:58.114533   23183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:07:58.247902   23183 main.go:141] libmachine: Creating SSH key...
	I0520 05:07:58.416815   23183 main.go:141] libmachine: Creating Disk image...
	I0520 05:07:58.416822   23183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:07:58.417008   23183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:07:58.430011   23183 main.go:141] libmachine: STDOUT: 
	I0520 05:07:58.430030   23183 main.go:141] libmachine: STDERR: 
	I0520 05:07:58.430079   23183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2 +20000M
	I0520 05:07:58.440994   23183 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:07:58.441014   23183 main.go:141] libmachine: STDERR: 
	I0520 05:07:58.441030   23183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:07:58.441034   23183 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:07:58.441076   23183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:5c:9a:c1:75:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:07:58.442769   23183 main.go:141] libmachine: STDOUT: 
	I0520 05:07:58.442786   23183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:07:58.442804   23183 client.go:171] duration metric: took 328.801041ms to LocalClient.Create
	I0520 05:08:00.444960   23183 start.go:128] duration metric: took 2.352523s to createHost
	I0520 05:08:00.445017   23183 start.go:83] releasing machines lock for "newest-cni-939000", held for 2.352638667s
	W0520 05:08:00.445062   23183 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:08:00.464437   23183 out.go:177] * Deleting "newest-cni-939000" in qemu2 ...
	W0520 05:08:00.510766   23183 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:08:00.510804   23183 start.go:728] Will try again in 5 seconds ...
	I0520 05:08:05.512944   23183 start.go:360] acquireMachinesLock for newest-cni-939000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:08:05.513507   23183 start.go:364] duration metric: took 449.958µs to acquireMachinesLock for "newest-cni-939000"
	I0520 05:08:05.513690   23183 start.go:93] Provisioning new machine with config: &{Name:newest-cni-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 05:08:05.514022   23183 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 05:08:05.524564   23183 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 05:08:05.578116   23183 start.go:159] libmachine.API.Create for "newest-cni-939000" (driver="qemu2")
	I0520 05:08:05.578169   23183 client.go:168] LocalClient.Create starting
	I0520 05:08:05.578295   23183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/ca.pem
	I0520 05:08:05.578364   23183 main.go:141] libmachine: Decoding PEM data...
	I0520 05:08:05.578380   23183 main.go:141] libmachine: Parsing certificate...
	I0520 05:08:05.578448   23183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18929-19024/.minikube/certs/cert.pem
	I0520 05:08:05.578494   23183 main.go:141] libmachine: Decoding PEM data...
	I0520 05:08:05.578509   23183 main.go:141] libmachine: Parsing certificate...
	I0520 05:08:05.579059   23183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 05:08:05.724856   23183 main.go:141] libmachine: Creating SSH key...
	I0520 05:08:05.860130   23183 main.go:141] libmachine: Creating Disk image...
	I0520 05:08:05.860137   23183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 05:08:05.860337   23183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:08:05.873236   23183 main.go:141] libmachine: STDOUT: 
	I0520 05:08:05.873260   23183 main.go:141] libmachine: STDERR: 
	I0520 05:08:05.873309   23183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2 +20000M
	I0520 05:08:05.884085   23183 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 05:08:05.884107   23183 main.go:141] libmachine: STDERR: 
	I0520 05:08:05.884116   23183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:08:05.884120   23183 main.go:141] libmachine: Starting QEMU VM...
	I0520 05:08:05.884152   23183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:14:c0:a9:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:08:05.885975   23183 main.go:141] libmachine: STDOUT: 
	I0520 05:08:05.885993   23183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:08:05.886014   23183 client.go:171] duration metric: took 307.834458ms to LocalClient.Create
	I0520 05:08:07.888179   23183 start.go:128] duration metric: took 2.374138167s to createHost
	I0520 05:08:07.888242   23183 start.go:83] releasing machines lock for "newest-cni-939000", held for 2.37470275s
	W0520 05:08:07.888550   23183 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:08:07.900196   23183 out.go:177] 
	W0520 05:08:07.903281   23183 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:08:07.903306   23183 out.go:239] * 
	* 
	W0520 05:08:07.906039   23183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:08:07.916193   23183 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-939000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000: exit status 7 (65.967417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-939000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.02s)
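Every qemu2 failure in this group stops at the same step: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so no VM ever boots. A minimal way to confirm the daemon state by hand on the build agent (a sketch; it assumes macOS netcat with unix-socket support, and that a healthy daemon is a socket_vmnet process holding that path):

  # Is a socket_vmnet daemon running at all?
  pgrep -fl socket_vmnet

  # Does the socket exist, and with what ownership/permissions?
  ls -l /var/run/socket_vmnet

  # Probe the unix socket; "Connection refused" here reproduces the error above
  nc -U /var/run/socket_vmnet < /dev/null

If the probe is refused, every subsequent qemu2 test in this report fails the same way before Kubernetes is even involved.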

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-128000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (31.220959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-128000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.98775ms)

** stderr ** 
	error: context "default-k8s-diff-port-128000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-128000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (28.370166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-128000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (27.964ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-128000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-128000 --alsologtostderr -v=1: exit status 83 (41.308083ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-128000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-128000"

-- /stdout --
** stderr ** 
	I0520 05:08:00.758757   23205 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:08:00.758914   23205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:08:00.758917   23205 out.go:304] Setting ErrFile to fd 2...
	I0520 05:08:00.758919   23205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:08:00.759039   23205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:08:00.759260   23205 out.go:298] Setting JSON to false
	I0520 05:08:00.759267   23205 mustload.go:65] Loading cluster: default-k8s-diff-port-128000
	I0520 05:08:00.759456   23205 config.go:182] Loaded profile config "default-k8s-diff-port-128000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:08:00.763982   23205 out.go:177] * The control-plane node default-k8s-diff-port-128000 host is not running: state=Stopped
	I0520 05:08:00.769162   23205 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-128000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-128000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (28.316458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (27.963625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-128000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-939000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-939000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.184125125s)

-- stdout --
	* [newest-cni-939000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-939000" primary control-plane node in "newest-cni-939000" cluster
	* Restarting existing qemu2 VM for "newest-cni-939000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-939000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 05:08:10.977328   23258 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:08:10.977439   23258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:08:10.977442   23258 out.go:304] Setting ErrFile to fd 2...
	I0520 05:08:10.977444   23258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:08:10.977563   23258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:08:10.978559   23258 out.go:298] Setting JSON to false
	I0520 05:08:10.994665   23258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":11261,"bootTime":1716195629,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 05:08:10.994733   23258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 05:08:11.000089   23258 out.go:177] * [newest-cni-939000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 05:08:11.007051   23258 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 05:08:11.011017   23258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 05:08:11.007080   23258 notify.go:220] Checking for updates...
	I0520 05:08:11.018006   23258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 05:08:11.021020   23258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 05:08:11.024004   23258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 05:08:11.027029   23258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 05:08:11.030397   23258 config.go:182] Loaded profile config "newest-cni-939000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:08:11.030678   23258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 05:08:11.034946   23258 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 05:08:11.042000   23258 start.go:297] selected driver: qemu2
	I0520 05:08:11.042007   23258 start.go:901] validating driver "qemu2" against &{Name:newest-cni-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:08:11.042055   23258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 05:08:11.044359   23258 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 05:08:11.044380   23258 cni.go:84] Creating CNI manager for ""
	I0520 05:08:11.044388   23258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 05:08:11.044410   23258 start.go:340] cluster config:
	{Name:newest-cni-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 05:08:11.048591   23258 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 05:08:11.055995   23258 out.go:177] * Starting "newest-cni-939000" primary control-plane node in "newest-cni-939000" cluster
	I0520 05:08:11.059929   23258 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 05:08:11.059942   23258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 05:08:11.059951   23258 cache.go:56] Caching tarball of preloaded images
	I0520 05:08:11.059997   23258 preload.go:173] Found /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 05:08:11.060002   23258 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 05:08:11.060071   23258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/newest-cni-939000/config.json ...
	I0520 05:08:11.060469   23258 start.go:360] acquireMachinesLock for newest-cni-939000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:08:11.060512   23258 start.go:364] duration metric: took 37.042µs to acquireMachinesLock for "newest-cni-939000"
	I0520 05:08:11.060521   23258 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:08:11.060529   23258 fix.go:54] fixHost starting: 
	I0520 05:08:11.060648   23258 fix.go:112] recreateIfNeeded on newest-cni-939000: state=Stopped err=<nil>
	W0520 05:08:11.060655   23258 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:08:11.065045   23258 out.go:177] * Restarting existing qemu2 VM for "newest-cni-939000" ...
	I0520 05:08:11.072958   23258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:14:c0:a9:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:08:11.074923   23258 main.go:141] libmachine: STDOUT: 
	I0520 05:08:11.074944   23258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:08:11.074974   23258 fix.go:56] duration metric: took 14.446375ms for fixHost
	I0520 05:08:11.074980   23258 start.go:83] releasing machines lock for "newest-cni-939000", held for 14.4635ms
	W0520 05:08:11.074984   23258 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:08:11.075016   23258 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:08:11.075021   23258 start.go:728] Will try again in 5 seconds ...
	I0520 05:08:16.077144   23258 start.go:360] acquireMachinesLock for newest-cni-939000: {Name:mk62c5fa095f6d36fc09e8de32f88d12eecc49ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 05:08:16.077513   23258 start.go:364] duration metric: took 283.709µs to acquireMachinesLock for "newest-cni-939000"
	I0520 05:08:16.077613   23258 start.go:96] Skipping create...Using existing machine configuration
	I0520 05:08:16.077629   23258 fix.go:54] fixHost starting: 
	I0520 05:08:16.078297   23258 fix.go:112] recreateIfNeeded on newest-cni-939000: state=Stopped err=<nil>
	W0520 05:08:16.078324   23258 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 05:08:16.082753   23258 out.go:177] * Restarting existing qemu2 VM for "newest-cni-939000" ...
	I0520 05:08:16.089953   23258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:01:14:c0:a9:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18929-19024/.minikube/machines/newest-cni-939000/disk.qcow2
	I0520 05:08:16.098724   23258 main.go:141] libmachine: STDOUT: 
	I0520 05:08:16.098803   23258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 05:08:16.098895   23258 fix.go:56] duration metric: took 21.264792ms for fixHost
	I0520 05:08:16.098953   23258 start.go:83] releasing machines lock for "newest-cni-939000", held for 21.415167ms
	W0520 05:08:16.099171   23258 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-939000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-939000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 05:08:16.106613   23258 out.go:177] 
	W0520 05:08:16.110716   23258 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 05:08:16.110752   23258 out.go:239] * 
	* 
	W0520 05:08:16.113362   23258 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 05:08:16.120521   23258 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-939000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000: exit status 7 (68.032917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-939000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
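The post-stop retry fails on the identical refusal, so the hint to run "minikube delete -p newest-cni-939000" cannot help: the profile is intact, and it is the host-side socket_vmnet daemon that is unreachable. Restarting the daemon is the more plausible fix (a sketch following the minikube qemu2 driver documentation; the gateway address is the documented default and may differ on this agent):

  # Relaunch the vmnet daemon that socket_vmnet_client connects to (needs root)
  sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

  # In another shell, retry the failed start against the existing profile
  out/minikube-darwin-arm64 start -p newest-cni-939000 --driver=qemu2 --network=socket_vmnet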

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-939000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000: exit status 7 (29.608333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-939000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-939000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-939000 --alsologtostderr -v=1: exit status 83 (41.928417ms)

-- stdout --
	* The control-plane node newest-cni-939000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-939000"

-- /stdout --
** stderr ** 
	I0520 05:08:16.304863   23272 out.go:291] Setting OutFile to fd 1 ...
	I0520 05:08:16.305027   23272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:08:16.305030   23272 out.go:304] Setting ErrFile to fd 2...
	I0520 05:08:16.305032   23272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 05:08:16.305169   23272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 05:08:16.305410   23272 out.go:298] Setting JSON to false
	I0520 05:08:16.305416   23272 mustload.go:65] Loading cluster: newest-cni-939000
	I0520 05:08:16.305622   23272 config.go:182] Loaded profile config "newest-cni-939000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 05:08:16.310591   23272 out.go:177] * The control-plane node newest-cni-939000 host is not running: state=Stopped
	I0520 05:08:16.314530   23272 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-939000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-939000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000: exit status 7 (29.719083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-939000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000: exit status 7 (29.11125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-939000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.30.1/json-events 6.33
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.08
18 TestDownloadOnly/v1.30.1/DeleteAll 0.23
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.33
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.32
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 8.49
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.68
55 TestFunctional/serial/CacheCmd/cache/add_local 1.36
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.22
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.27
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
107 TestFunctional/parallel/ProfileCmd/profile_list 0.1
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 1.44
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.12
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.16
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.94
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.04
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.44
258 TestNoKubernetes/serial/Stop 3.34
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.64
275 TestStartStop/group/old-k8s-version/serial/Stop 3.29
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
286 TestStartStop/group/no-preload/serial/Stop 3.43
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
299 TestStartStop/group/embed-certs/serial/Stop 2.92
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.27
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 2.77
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-533000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-533000: exit status 85 (92.997792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:41 PDT |          |
	|         | -p download-only-533000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:41:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:41:59.158800   19519 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:59.158945   19519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:59.158948   19519 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:59.158951   19519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:59.159075   19519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	W0520 04:41:59.159156   19519 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18929-19024/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18929-19024/.minikube/config/config.json: no such file or directory
	I0520 04:41:59.160417   19519 out.go:298] Setting JSON to true
	I0520 04:41:59.176836   19519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9690,"bootTime":1716195629,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:59.176896   19519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:59.182296   19519 out.go:97] [download-only-533000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:59.185430   19519 out.go:169] MINIKUBE_LOCATION=18929
	I0520 04:41:59.182431   19519 notify.go:220] Checking for updates...
	W0520 04:41:59.182460   19519 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 04:41:59.193235   19519 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:41:59.196374   19519 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:59.199380   19519 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:59.203231   19519 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	W0520 04:41:59.209346   19519 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 04:41:59.209548   19519 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:59.212327   19519 out.go:97] Using the qemu2 driver based on user configuration
	I0520 04:41:59.212344   19519 start.go:297] selected driver: qemu2
	I0520 04:41:59.212358   19519 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:41:59.212410   19519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:41:59.215312   19519 out.go:169] Automatically selected the socket_vmnet network
	I0520 04:41:59.220569   19519 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 04:41:59.220664   19519 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:41:59.220690   19519 cni.go:84] Creating CNI manager for ""
	I0520 04:41:59.220708   19519 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:41:59.220763   19519 start.go:340] cluster config:
	{Name:download-only-533000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-533000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:59.225552   19519 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:59.229364   19519 out.go:97] Downloading VM boot image ...
	I0520 04:41:59.229381   19519 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso
	I0520 04:42:03.588366   19519 out.go:97] Starting "download-only-533000" primary control-plane node in "download-only-533000" cluster
	I0520 04:42:03.588396   19519 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:42:03.646497   19519 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:42:03.646524   19519 cache.go:56] Caching tarball of preloaded images
	I0520 04:42:03.647520   19519 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:42:03.650863   19519 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 04:42:03.650870   19519 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:03.727723   19519 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:42:08.983381   19519 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:08.983544   19519 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:09.680281   19519 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:42:09.680488   19519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/download-only-533000/config.json ...
	I0520 04:42:09.680508   19519 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/download-only-533000/config.json: {Name:mkc4239b44e2dd244cc9a8aca81a5ab2bee270c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:42:09.681805   19519 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:42:09.682003   19519 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0520 04:42:10.063849   19519 out.go:169] 
	W0520 04:42:10.068876   19519 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106469380 0x106469380 0x106469380 0x106469380 0x106469380 0x106469380 0x106469380] Decompressors:map[bz2:0x1400045e5b0 gz:0x1400045e5b8 tar:0x1400045e4b0 tar.bz2:0x1400045e4e0 tar.gz:0x1400045e500 tar.xz:0x1400045e540 tar.zst:0x1400045e560 tbz2:0x1400045e4e0 tgz:0x1400045e500 txz:0x1400045e540 tzst:0x1400045e560 xz:0x1400045e5c0 zip:0x1400045e5d0 zst:0x1400045e5c8] Getters:map[file:0x1400072dc70 http:0x14000898460 https:0x140008984b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 04:42:10.068898   19519 out_reason.go:110] 
	W0520 04:42:10.076817   19519 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:42:10.080826   19519 out.go:169] 
	
	
	* The control-plane node download-only-533000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-533000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
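
Note on the failure captured above: the LogsDuration subtest passes even though `minikube logs` exits 85, but the root cause of the v1.20.0 download failure is visible in the getter error: the checksum sidecar URL returned HTTP 404, most likely because upstream never published a darwin/arm64 kubectl binary for v1.20.0. A minimal manual check of that URL, assuming curl is available on the agent:

    # Print only the HTTP status code of the checksum file the downloader asked for.
    curl -sSL -o /dev/null -w '%{http_code}\n' \
      https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256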

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-533000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.1/json-events (6.33s)
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-341000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-341000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 : (6.331850375s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (6.33s)

TestDownloadOnly/v1.30.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-341000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-341000: exit status 85 (76.84775ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:41 PDT |                     |
	|         | -p download-only-533000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| delete  | -p download-only-533000        | download-only-533000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT | 20 May 24 04:42 PDT |
	| start   | -o=json --download-only        | download-only-341000 | jenkins | v1.33.1 | 20 May 24 04:42 PDT |                     |
	|         | -p download-only-341000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:42:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:42:10.738069   19555 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:42:10.738184   19555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:42:10.738187   19555 out.go:304] Setting ErrFile to fd 2...
	I0520 04:42:10.738189   19555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:42:10.738346   19555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:42:10.739438   19555 out.go:298] Setting JSON to true
	I0520 04:42:10.755477   19555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9701,"bootTime":1716195629,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:42:10.755544   19555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:42:10.760401   19555 out.go:97] [download-only-341000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:42:10.764358   19555 out.go:169] MINIKUBE_LOCATION=18929
	I0520 04:42:10.760506   19555 notify.go:220] Checking for updates...
	I0520 04:42:10.771339   19555 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:42:10.774389   19555 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:42:10.777412   19555 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:42:10.780378   19555 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	W0520 04:42:10.786412   19555 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 04:42:10.786621   19555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:42:10.789369   19555 out.go:97] Using the qemu2 driver based on user configuration
	I0520 04:42:10.789379   19555 start.go:297] selected driver: qemu2
	I0520 04:42:10.789382   19555 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:42:10.789430   19555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:42:10.792337   19555 out.go:169] Automatically selected the socket_vmnet network
	I0520 04:42:10.797492   19555 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 04:42:10.797586   19555 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:42:10.797603   19555 cni.go:84] Creating CNI manager for ""
	I0520 04:42:10.797613   19555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:42:10.797619   19555 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:42:10.797683   19555 start.go:340] cluster config:
	{Name:download-only-341000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:42:10.801815   19555 iso.go:125] acquiring lock: {Name:mkd2d74aea60f57e68424d46800e51dabd4dfb03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:42:10.804369   19555 out.go:97] Starting "download-only-341000" primary control-plane node in "download-only-341000" cluster
	I0520 04:42:10.804377   19555 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:42:10.859763   19555 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:42:10.859780   19555 cache.go:56] Caching tarball of preloaded images
	I0520 04:42:10.859943   19555 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:42:10.865080   19555 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 04:42:10.865087   19555 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:10.936471   19555 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:42:15.146366   19555 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:15.147022   19555 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:42:15.689225   19555 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:42:15.689442   19555 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/download-only-341000/config.json ...
	I0520 04:42:15.689459   19555 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18929-19024/.minikube/profiles/download-only-341000/config.json: {Name:mk74158886d0ea9533ea124876a9ef02bdedb401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:42:15.689695   19555 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:42:15.690695   19555 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18929-19024/.minikube/cache/darwin/arm64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-341000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-341000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

TestDownloadOnly/v1.30.1/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-341000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.33s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-746000 --alsologtostderr --binary-mirror http://127.0.0.1:53721 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-746000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-746000
--- PASS: TestBinaryMirror (0.33s)
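
TestBinaryMirror passes even on this broken environment because --binary-mirror only redirects where the kubectl/kubeadm/kubelet binaries are fetched from; no VM is needed. A rough sketch of standing up such a mirror locally, under the assumption that the mirror must mimic the dl.k8s.io release path layout (directory names here are illustrative, not taken from the test):

    # Hypothetical mirror layout: <base>/v<version>/bin/<os>/<arch>/<binary>
    mkdir -p mirror/v1.30.1/bin/darwin/arm64
    # ...copy kubectl, kubeadm, kubelet (and their .sha256 files) into place...
    ( cd mirror && python3 -m http.server 53721 )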

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-892000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-892000: exit status 85 (57.275083ms)

-- stdout --
	* Profile "addons-892000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-892000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-892000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-892000: exit status 85 (61.095667ms)

-- stdout --
	* Profile "addons-892000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-892000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
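
Both PreSetup subtests pin down the same contract: addon commands against a nonexistent profile fail fast with exit status 85 and a "Profile ... not found" hint. A script driving this binary could gate on profile existence first; a sketch (the "Name" field match is an assumption about the exact JSON shape):

    if out/minikube-darwin-arm64 profile list -o json | grep -q '"Name": *"addons-892000"'; then
      out/minikube-darwin-arm64 addons enable dashboard -p addons-892000
    fi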

TestHyperKitDriverInstallOrUpdate (10.32s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.32s)

TestErrorSpam/start (0.4s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status: exit status 7 (30.228125ms)

-- stdout --
	nospam-804000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status: exit status 7 (29.34275ms)

-- stdout --
	nospam-804000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status: exit status 7 (29.909083ms)

-- stdout --
	nospam-804000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
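
All three status calls exit 7 against the stopped profile. minikube's own status help describes the exit code as health bits for the VM, cluster, and Kubernetes packed from right to left, so 7 is consistent with every component reporting Stopped above. A sketch of decoding it, assuming that bit layout:

    out/minikube-darwin-arm64 -p nospam-804000 status
    code=$?
    # Decode the packed health bits (assumed: 1=VM, 2=cluster, 4=kubernetes).
    [ $((code & 1)) -ne 0 ] && echo "VM not OK"
    [ $((code & 2)) -ne 0 ] && echo "cluster not OK"
    [ $((code & 4)) -ne 0 ] && echo "kubernetes not OK"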

TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause: exit status 83 (37.642417ms)

-- stdout --
	* The control-plane node nospam-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-804000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause: exit status 83 (39.093584ms)

-- stdout --
	* The control-plane node nospam-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-804000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause: exit status 83 (39.665708ms)

-- stdout --
	* The control-plane node nospam-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-804000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause: exit status 83 (38.895292ms)

-- stdout --
	* The control-plane node nospam-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-804000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause: exit status 83 (41.24ms)

-- stdout --
	* The control-plane node nospam-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-804000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause: exit status 83 (41.225791ms)

-- stdout --
	* The control-plane node nospam-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-804000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (8.49s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 stop: (2.078779583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 stop: (3.172369625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-804000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-804000 stop: (3.240418917s)
--- PASS: TestErrorSpam/stop (8.49s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18929-19024/.minikube/files/etc/test/nested/copy/19517/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.68s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2874187487/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cache add minikube-local-cache-test:functional-832000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 cache delete minikube-local-cache-test:functional-832000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-832000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 config get cpus: exit status 14 (28.481709ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 config get cpus: exit status 14 (35.249125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
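
The unset/get round trips above pin a useful contract: config get on a missing key exits 14 with "specified key could not be found in config" on stderr. A small sketch that treats exit 14 as "not set" rather than a hard error when scripting:

    if cpus=$(out/minikube-darwin-arm64 -p functional-832000 config get cpus 2>/dev/null); then
      echo "cpus configured: $cpus"
    elif [ $? -eq 14 ]; then
      echo "cpus not set"
    fi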

TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-832000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-832000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.791458ms)

-- stdout --
	* [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0520 04:43:53.982992   20048 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:43:53.983140   20048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:43:53.983143   20048 out.go:304] Setting ErrFile to fd 2...
	I0520 04:43:53.983145   20048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:43:53.983264   20048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:43:53.984260   20048 out.go:298] Setting JSON to false
	I0520 04:43:54.000246   20048 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9804,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:43:54.000323   20048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:43:54.005560   20048 out.go:177] * [functional-832000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:43:54.010509   20048 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:43:54.010571   20048 notify.go:220] Checking for updates...
	I0520 04:43:54.015753   20048 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:43:54.018474   20048 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:43:54.021543   20048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:43:54.022786   20048 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:43:54.025517   20048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:43:54.028815   20048 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:43:54.029087   20048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:43:54.033395   20048 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:43:54.040479   20048 start.go:297] selected driver: qemu2
	I0520 04:43:54.040488   20048 start.go:901] validating driver "qemu2" against &{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:43:54.040544   20048 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:43:54.046452   20048 out.go:177] 
	W0520 04:43:54.050540   20048 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 04:43:54.053492   20048 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-832000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
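
The dry run exits 23 after tripping RSRC_INSUFFICIENT_REQ_MEMORY: the requested 250MB is well under the 1800MB usable minimum quoted in the message, and no VM is ever created. A pre-flight guard in the same spirit (the 1800 figure comes from this log, not from a stable interface):

    REQ_MB=250
    MIN_MB=1800
    if [ "$REQ_MB" -lt "$MIN_MB" ]; then
      echo "requested ${REQ_MB}MB is below the usable minimum of ${MIN_MB}MB; raise --memory" >&2
    fi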

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-832000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-832000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.627917ms)

-- stdout --
	* [functional-832000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0520 04:43:53.866172   20044 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:43:53.866274   20044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:43:53.866277   20044 out.go:304] Setting ErrFile to fd 2...
	I0520 04:43:53.866279   20044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:43:53.866409   20044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18929-19024/.minikube/bin
	I0520 04:43:53.867835   20044 out.go:298] Setting JSON to false
	I0520 04:43:53.884496   20044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9804,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:43:53.884603   20044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:43:53.889571   20044 out.go:177] * [functional-832000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0520 04:43:53.896621   20044 out.go:177]   - MINIKUBE_LOCATION=18929
	I0520 04:43:53.896683   20044 notify.go:220] Checking for updates...
	I0520 04:43:53.904484   20044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	I0520 04:43:53.908524   20044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:43:53.911481   20044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:43:53.914499   20044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	I0520 04:43:53.917550   20044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:43:53.920831   20044 config.go:182] Loaded profile config "functional-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:43:53.921081   20044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:43:53.925407   20044 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0520 04:43:53.932521   20044 start.go:297] selected driver: qemu2
	I0520 04:43:53.932529   20044 start.go:901] validating driver "qemu2" against &{Name:functional-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:43:53.932600   20044 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:43:53.938471   20044 out.go:177] 
	W0520 04:43:53.942294   20044 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 04:43:53.946494   20044 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
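
InternationalLanguage repeats the under-provisioned dry run and asserts the French rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY failure. minikube chooses translations from the locale environment, so the localized output should be reproducible directly, assuming a French locale is installed on the host:

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-832000 \
      --dry-run --memory 250MB --alsologtostderr --driver=qemu2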

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "68.129791ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "31.618208ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "68.590208ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.468666ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.44s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.405983958s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-832000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image rm gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.12s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-832000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 image save --daemon gcr.io/google-containers/addon-resizer:functional-832000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-832000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.12s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011165083s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
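Note: the check above resolves the tunnel's in-cluster DNS name through macOS's directory service rather than Go's resolver. A small sketch of the same host-side probe, assuming (as the test does) that dscacheutil prints an "ip_address:" line on success:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same query the test issues against the macOS resolver.
        out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name",
            "nginx-svc.default.svc.cluster.local.").CombinedOutput()
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        // dscacheutil reports "ip_address: <addr>" when resolution succeeds.
        fmt.Println("resolved:", strings.Contains(string(out), "ip_address"))
    }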

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-832000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-832000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-832000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-832000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.94s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-086000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-086000 --output=json --user=testUser: (2.944436291s)
--- PASS: TestJSONOutput/stop/Command (2.94s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-521000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-521000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.963792ms)

-- stdout --
	{"specversion":"1.0","id":"4c9e42bc-333b-48c1-9a17-229d8408182c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-521000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"159fda11-baaf-4d52-83b2-5acda99d54b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18929"}}
	{"specversion":"1.0","id":"d940bb23-d553-4d49-b862-f59329e667f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig"}}
	{"specversion":"1.0","id":"f4e65794-ea2d-4f22-8e0f-56b1901670e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a59001dd-d661-4762-b2c8-6b36962530f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"23f30913-4dfa-4a95-8e0a-e586279588bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube"}}
	{"specversion":"1.0","id":"f34e93ac-23f2-459c-846d-0d603e4be33b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8aa12dd5-7d9e-4c8c-b7c3-ebb1eaecba82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-521000
--- PASS: TestErrorJSONOutput (0.32s)
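Note: the captured stdout above shows that minikube's --output=json mode emits one CloudEvents-style JSON object per line. A minimal Go sketch of a consumer for such a stream; the struct below mirrors only the fields visible in this log and is an assumption, not minikube's own type:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // cloudEvent captures the fields seen in the log lines above.
    type cloudEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // e.g. piped from: minikube start --output=json ...
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev cloudEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // ignore any non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }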

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.04s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-384000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.811333ms)

-- stdout --
	* [NoKubernetes-384000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18929
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18929-19024/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18929-19024/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
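Note: the MK_USAGE failure above is a mutually-exclusive-flag check: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of that kind of validation, using an illustrative flag set rather than minikube's actual source:

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noKubernetes := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
        kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // Reject the contradictory combination, as the log above does.
        if *noKubernetes && *kubernetesVersion != "" {
            fmt.Fprintln(os.Stderr, "MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // matches the exit status recorded above
        }
    }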

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-384000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-384000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.55075ms)

-- stdout --
	* The control-plane node NoKubernetes-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-384000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.44s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.68164275s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.758430625s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.44s)

TestNoKubernetes/serial/Stop (3.34s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-384000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-384000: (3.337463916s)
--- PASS: TestNoKubernetes/serial/Stop (3.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-384000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-384000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.15175ms)

-- stdout --
	* The control-plane node NoKubernetes-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-384000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-298000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (3.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-593000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-593000 --alsologtostderr -v=3: (3.292281666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-593000 -n old-k8s-version-593000: exit status 7 (29.063459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-593000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
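Note: the status command above passes --format={{.Host}}, a Go text/template rendered against the profile's status, which is why the captured stdout is the bare word "Stopped". A minimal sketch of how such a template renders; the Status struct here is illustrative, not minikube's type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status stands in for the structure the --format template is applied to.
    type Status struct{ Host string }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        // Prints "Stopped", matching the stdout captured above.
        tmpl.Execute(os.Stdout, Status{Host: "Stopped"})
    }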

TestStartStop/group/no-preload/serial/Stop (3.43s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-829000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-829000 --alsologtostderr -v=3: (3.427722417s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.43s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-829000 -n no-preload-829000: exit status 7 (42.349708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-829000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (2.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-068000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-068000 --alsologtostderr -v=3: (2.918791584s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-128000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-128000 --alsologtostderr -v=3: (3.269119584s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-068000 -n embed-certs-068000: exit status 7 (52.588792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-068000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-128000 -n default-k8s-diff-port-128000: exit status 7 (55.788583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-128000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-939000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.77s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-939000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-939000 --alsologtostderr -v=3: (2.772686625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.77s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-939000 -n newest-cni-939000: exit status 7 (53.153042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-939000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1115475253/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716205395057827000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1115475253/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716205395057827000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1115475253/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716205395057827000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1115475253/001/test-1716205395057827000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.729584ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.394333ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.295959ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.795833ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.34375ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.819084ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.845708ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo umount -f /mount-9p": exit status 83 (46.615875ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1115475253/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.00s)
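Note: the skip above follows a poll-until-timeout pattern: the test repeatedly runs findmnt inside the guest over ssh and gives up when the 9p mount never appears (macOS prompts before letting a non-code-signed binary listen on a non-localhost port). A rough Go sketch of that retry loop; the binary path and profile name are taken from the log, the timing values are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // mountAppeared polls the guest until the 9p mount is visible or the
    // deadline passes, mirroring the repeated findmnt attempts above.
    func mountAppeared(profile string) bool {
        deadline := time.Now().Add(10 * time.Second)
        for time.Now().Before(deadline) {
            cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if cmd.Run() == nil { // exit status 0: the mount is visible
                return true
            }
            time.Sleep(time.Second)
        }
        return false
    }

    func main() {
        fmt.Println(mountAppeared("functional-832000"))
    }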

TestFunctional/parallel/MountCmd/specific-port (15s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1120762510/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.421209ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.382583ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.483667ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.545916ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.763208ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.897541ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.459125ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.16875ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "sudo umount -f /mount-9p": exit status 83 (50.63525ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-832000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1120762510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (15.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.62s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup779189337/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup779189337/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup779189337/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (83.271459ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (84.772333ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (85.78375ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (86.479833ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (85.419542ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (86.042041ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-832000 ssh "findmnt -T" /mount1: exit status 83 (85.333041ms)

-- stdout --
	* The control-plane node functional-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-832000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup779189337/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup779189337/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-832000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup779189337/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.62s)
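Note: both MountCmd tests above follow the same probe-and-skip pattern: launch the mount daemon, then repeatedly run findmnt in the guest over ssh until the 9p mount shows up, and skip when it never does (here every probe exits with status 83 because the node is stopped). A minimal sketch of that poll loop, with hypothetical helper names rather than the actual minikube test code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // mountAppeared asks the guest whether a 9p filesystem is mounted at
    // mountPoint; any non-zero exit (such as the status 83 printed above
    // when the node is not running) counts as "not mounted yet".
    func mountAppeared(profile, mountPoint string) bool {
    	cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
    		"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
    	return cmd.Run() == nil
    }

    // waitForMount retries a fixed number of times before giving up,
    // mirroring the repeated findmnt probes in the log above.
    func waitForMount(profile, mountPoint string, attempts int) bool {
    	for i := 0; i < attempts; i++ {
    		if mountAppeared(profile, mountPoint) {
    			return true
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return false
    }

    func main() {
    	if !waitForMount("functional-832000", "/mount-9p", 7) {
    		fmt.Println("skipping: mount did not appear")
    	}
    }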

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
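Note: this skip is flag-gated rather than environment-gated. In outline, a Go integration test guards itself like the sketch below (assumed flag wiring, not the exact gvisor_addon_test.go source):

    package integration

    import (
    	"flag"
    	"testing"
    )

    // Disabled by default; a CI job opts in with -gvisor=true.
    var gvisor = flag.Bool("gvisor", false, "run gvisor integration tests")

    func TestGvisorAddon(t *testing.T) {
    	if !*gvisor {
    		t.Skip("skipping test because --gvisor=false")
    	}
    	// ... gvisor addon checks would run here ...
    }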

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
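Note: driver-gated skips such as this one (and the KIC tests that follow) key off the driver the suite was launched with instead of a feature flag. A sketch, where the driver variable is a hypothetical stand-in for the harness's state; this job runs the qemu2 driver, so every KIC test skips:

    package integration

    import "testing"

    // Hypothetical stand-in for the driver selected for this run.
    var driver = "qemu2"

    func TestKicCustomNetwork(t *testing.T) {
    	// KIC (kubernetes-in-container) custom networks exist only
    	// under the docker driver.
    	if driver != "docker" {
    		t.Skip("only runs with docker driver")
    	}
    }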

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
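Note: OS-gated skips are typically just a runtime.GOOS check, which is why this test can never run on a darwin/arm64 worker. A sketch (not the exact scheduled_stop_test.go source):

    package integration

    import (
    	"runtime"
    	"testing"
    )

    func TestScheduledStopWindows(t *testing.T) {
    	// Scheduled stop has a Windows-specific code path; every other
    	// GOOS (including the darwin runner here) skips immediately.
    	if runtime.GOOS != "windows" {
    		t.Skip("test only runs on windows")
    	}
    }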

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.36s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-458000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-458000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-458000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/hosts:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/resolv.conf:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-458000

>>> host: crictl pods:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: crictl containers:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> k8s: describe netcat deployment:
error: context "cilium-458000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-458000" does not exist

>>> k8s: netcat logs:
error: context "cilium-458000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-458000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-458000" does not exist

>>> k8s: coredns logs:
error: context "cilium-458000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-458000" does not exist

>>> k8s: api server logs:
error: context "cilium-458000" does not exist

>>> host: /etc/cni:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: ip a s:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: ip r s:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: iptables-save:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: iptables table nat:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-458000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-458000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-458000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-458000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-458000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-458000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-458000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-458000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-458000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-458000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-458000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: kubelet daemon config:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> k8s: kubelet logs:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-458000

>>> host: docker daemon status:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: docker daemon config:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: docker system info:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: cri-docker daemon status:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: cri-docker daemon config:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: cri-dockerd version:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: containerd daemon status:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: containerd daemon config:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: containerd config dump:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: crio daemon status:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: crio daemon config:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: /etc/crio:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

>>> host: crio config:
* Profile "cilium-458000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458000"

----------------------- debugLogs end: cilium-458000 [took: 2.131567583s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-458000
--- SKIP: TestNetworkPlugins/group/cilium (2.36s)
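Note: the root cause of every failed probe in the debugLogs above is visible in the kubectl config dump: the kubeconfig is empty (clusters: null, contexts: null, current-context: ""), so any lookup of the cilium-458000 context fails before a single packet is sent. A sketch of inspecting that state programmatically with client-go (an assumed dependency, not part of this suite):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load kubeconfig from the default search path ($KUBECONFIG or
    	// ~/.kube/config); with the empty config shown above, no
    	// contexts are printed and current-context is "".
    	rules := clientcmd.NewDefaultClientConfigLoadingRules()
    	cfg, err := rules.Load()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
    	for name := range cfg.Contexts {
    		fmt.Println("available context:", name)
    	}
    }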

TestStartStop/group/disable-driver-mounts (0.32s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-771000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-771000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.32s)
