Test Report: QEMU_macOS 18932

ef88892450886ee42051bb5f4cefdb4041e06670:2024-05-20:34547

Failed tests (156/258)

Order   Failed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.74
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.04
27 TestAddons/Setup 10.49
28 TestCertOptions 10.12
29 TestCertExpiration 195.23
30 TestDockerFlags 10.15
31 TestForceSystemdFlag 10.08
32 TestForceSystemdEnv 11.23
38 TestErrorSpam/setup 9.84
47 TestFunctional/serial/StartWithProxy 9.8
49 TestFunctional/serial/SoftStart 5.25
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.94
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.19
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.26
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 119.02
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.47
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.59
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 36.39
141 TestMultiControlPlane/serial/StartCluster 10.13
142 TestMultiControlPlane/serial/DeployApp 104.4
143 TestMultiControlPlane/serial/PingHostFromPods 0.08
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.1
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 39.74
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.27
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.7
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.94
165 TestJSONOutput/start/Command 9.79
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.04
194 TestMinikubeProfile 10.29
197 TestMountStart/serial/StartWithMountFirst 10.21
200 TestMultiNode/serial/FreshStart2Nodes 9.98
201 TestMultiNode/serial/DeployApp2Nodes 82
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.13
208 TestMultiNode/serial/StartAfterStop 51.91
209 TestMultiNode/serial/RestartKeepsNodes 7.49
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.62
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.54
217 TestPreload 9.94
219 TestScheduledStopUnix 10.21
220 TestSkaffold 12.38
223 TestRunningBinaryUpgrade 601.06
225 TestKubernetesUpgrade 17.61
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.19
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.54
241 TestStoppedBinaryUpgrade/Upgrade 576.65
243 TestPause/serial/Start 10.09
253 TestNoKubernetes/serial/StartWithK8s 9.98
254 TestNoKubernetes/serial/StartWithStopK8s 5.29
255 TestNoKubernetes/serial/Start 5.3
259 TestNoKubernetes/serial/StartNoArgs 5.29
261 TestNetworkPlugins/group/auto/Start 9.93
262 TestNetworkPlugins/group/flannel/Start 9.84
263 TestNetworkPlugins/group/kindnet/Start 9.85
264 TestNetworkPlugins/group/enable-default-cni/Start 9.8
265 TestNetworkPlugins/group/bridge/Start 9.78
266 TestNetworkPlugins/group/kubenet/Start 9.83
267 TestNetworkPlugins/group/custom-flannel/Start 9.68
268 TestNetworkPlugins/group/calico/Start 9.74
269 TestNetworkPlugins/group/false/Start 9.8
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 10.15
285 TestStartStop/group/embed-certs/serial/FirstStart 12.27
286 TestStartStop/group/no-preload/serial/DeployApp 0.1
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
290 TestStartStop/group/no-preload/serial/SecondStart 5.89
291 TestStartStop/group/embed-certs/serial/DeployApp 0.09
292 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
296 TestStartStop/group/no-preload/serial/Pause 0.1
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
301 TestStartStop/group/embed-certs/serial/SecondStart 7.15
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
303 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
306 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
307 TestStartStop/group/embed-certs/serial/Pause 0.11
310 TestStartStop/group/newest-cni/serial/FirstStart 9.82
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.71
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (10.74s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-078000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-078000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.741959916s)

-- stdout --
	{"specversion":"1.0","id":"f008b18e-172e-4477-99c0-2e372ee1519b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-078000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"566061a5-488b-4697-a920-22c53b7af3c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18932"}}
	{"specversion":"1.0","id":"c99305db-3d5f-4487-869e-bf4de49806f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig"}}
	{"specversion":"1.0","id":"7f24ceda-4684-4e94-a9c4-1de5b34a44d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"925b393f-890e-445c-ba8c-2e2f6282311e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b3f66f88-e20f-4285-ac1b-929af52d5c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube"}}
	{"specversion":"1.0","id":"2540af0f-b164-4d7c-bd72-085d7dd6e33e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6c18910a-4b92-41bb-8498-037961295a11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"86106ac0-dfac-45bc-8f25-ec8a8fdb57d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"06ed1ea5-3396-4f21-9772-746d262b5c65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5670df94-9502-445d-99fe-5e14dac27c59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-078000\" primary control-plane node in \"download-only-078000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3e81b6a-8bee-433f-8555-1792edca96db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"39123360-4e05-4ac1-a5c1-36a1e8b6ddd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108545380 0x108545380 0x108545380 0x108545380 0x108545380 0x108545380 0x108545380] Decompressors:map[bz2:0x14000906230 gz:0x14000906238 tar:0x14000906180 tar.bz2:0x140009061a0 tar.gz:0x140009061c0 tar.xz:0x140009061e0 tar.zst:0x14000906210 tbz2:0x140009061a0 tgz:0x1
40009061c0 txz:0x140009061e0 tzst:0x14000906210 xz:0x14000906250 zip:0x14000906280 zst:0x14000906258] Getters:map[file:0x140012046c0 http:0x140005242d0 https:0x14000524320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"890ce133-56ad-4cce-8d89-31da587646a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0520 04:15:12.589288   14897 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:15:12.589440   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:12.589443   14897 out.go:304] Setting ErrFile to fd 2...
	I0520 04:15:12.589446   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:12.589555   14897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	W0520 04:15:12.589644   14897 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18932-14402/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18932-14402/.minikube/config/config.json: no such file or directory
	I0520 04:15:12.590874   14897 out.go:298] Setting JSON to true
	I0520 04:15:12.608642   14897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8083,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:15:12.608713   14897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:15:12.614062   14897 out.go:97] [download-only-078000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:15:12.617365   14897 out.go:169] MINIKUBE_LOCATION=18932
	I0520 04:15:12.614247   14897 notify.go:220] Checking for updates...
	W0520 04:15:12.614256   14897 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 04:15:12.626661   14897 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:15:12.630184   14897 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:15:12.633046   14897 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:15:12.636881   14897 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	W0520 04:15:12.645016   14897 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 04:15:12.645207   14897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:15:12.648094   14897 out.go:97] Using the qemu2 driver based on user configuration
	I0520 04:15:12.648113   14897 start.go:297] selected driver: qemu2
	I0520 04:15:12.648127   14897 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:15:12.648183   14897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:15:12.651106   14897 out.go:169] Automatically selected the socket_vmnet network
	I0520 04:15:12.654796   14897 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 04:15:12.654886   14897 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:15:12.654918   14897 cni.go:84] Creating CNI manager for ""
	I0520 04:15:12.654934   14897 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:15:12.654976   14897 start.go:340] cluster config:
	{Name:download-only-078000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:15:12.659911   14897 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:15:12.664149   14897 out.go:97] Downloading VM boot image ...
	I0520 04:15:12.664164   14897 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso
	I0520 04:15:16.943341   14897 out.go:97] Starting "download-only-078000" primary control-plane node in "download-only-078000" cluster
	I0520 04:15:16.943366   14897 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:15:16.999037   14897 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:15:16.999050   14897 cache.go:56] Caching tarball of preloaded images
	I0520 04:15:16.999871   14897 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:15:17.011017   14897 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 04:15:17.011024   14897 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:17.089237   14897 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:15:22.199198   14897 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:22.199362   14897 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:22.896045   14897 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:15:22.896268   14897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/download-only-078000/config.json ...
	I0520 04:15:22.896285   14897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/download-only-078000/config.json: {Name:mkd359158ddefb93e2ed43be99a3144ab2d9a0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:15:22.896552   14897 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:15:22.897422   14897 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0520 04:15:23.253992   14897 out.go:169] 
	W0520 04:15:23.258288   14897 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108545380 0x108545380 0x108545380 0x108545380 0x108545380 0x108545380 0x108545380] Decompressors:map[bz2:0x14000906230 gz:0x14000906238 tar:0x14000906180 tar.bz2:0x140009061a0 tar.gz:0x140009061c0 tar.xz:0x140009061e0 tar.zst:0x14000906210 tbz2:0x140009061a0 tgz:0x140009061c0 txz:0x140009061e0 tzst:0x14000906210 xz:0x14000906250 zip:0x14000906280 zst:0x14000906258] Getters:map[file:0x140012046c0 http:0x140005242d0 https:0x14000524320] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 04:15:23.258314   14897 out_reason.go:110] 
	W0520 04:15:23.266311   14897 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:15:23.270147   14897 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-078000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.74s)
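
The failure above comes down to a 404 on the kubectl checksum URL for darwin/arm64 at v1.20.0, which the getter reports as "invalid checksum" and minikube turns into exit status 40 (INET_CACHE_KUBECTL). The following is a minimal diagnostic sketch, not part of the minikube test suite, assuming Go and network access on the build agent; it probes the same checksum URL the getter tried to fetch:

// checksum_probe.go - issues a HEAD request against the kubectl checksum URL
// taken verbatim from the INET_CACHE_KUBECTL error above and prints the status.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// URL copied from the error message in the log above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	// A "404 Not Found" here corresponds to "Error downloading checksum file:
	// bad response code: 404" reported by the test.
	fmt.Println(url, "->", resp.Status)
}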

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.04s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-232000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-232000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.869282375s)

-- stdout --
	* [offline-docker-232000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-232000" primary control-plane node in "offline-docker-232000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-232000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:26:40.791303   16507 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:26:40.791458   16507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:40.791461   16507 out.go:304] Setting ErrFile to fd 2...
	I0520 04:26:40.791464   16507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:40.791589   16507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:26:40.792897   16507 out.go:298] Setting JSON to false
	I0520 04:26:40.810422   16507 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8771,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:26:40.810498   16507 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:26:40.816338   16507 out.go:177] * [offline-docker-232000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:26:40.823341   16507 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:26:40.823374   16507 notify.go:220] Checking for updates...
	I0520 04:26:40.830245   16507 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:26:40.833306   16507 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:26:40.836274   16507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:26:40.837744   16507 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:26:40.840245   16507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:26:40.843627   16507 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:26:40.843684   16507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:26:40.847133   16507 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:26:40.854241   16507 start.go:297] selected driver: qemu2
	I0520 04:26:40.854253   16507 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:26:40.854261   16507 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:26:40.856205   16507 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:26:40.860189   16507 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:26:40.864301   16507 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:26:40.864324   16507 cni.go:84] Creating CNI manager for ""
	I0520 04:26:40.864331   16507 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:26:40.864335   16507 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:26:40.864376   16507 start.go:340] cluster config:
	{Name:offline-docker-232000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:26:40.869055   16507 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:40.876287   16507 out.go:177] * Starting "offline-docker-232000" primary control-plane node in "offline-docker-232000" cluster
	I0520 04:26:40.879286   16507 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:26:40.879326   16507 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:26:40.879342   16507 cache.go:56] Caching tarball of preloaded images
	I0520 04:26:40.879427   16507 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:26:40.879432   16507 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:26:40.879498   16507 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/offline-docker-232000/config.json ...
	I0520 04:26:40.879508   16507 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/offline-docker-232000/config.json: {Name:mke53b3663b3048b25c283cbf7e26f366aff701a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:26:40.879797   16507 start.go:360] acquireMachinesLock for offline-docker-232000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:40.879831   16507 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "offline-docker-232000"
	I0520 04:26:40.879843   16507 start.go:93] Provisioning new machine with config: &{Name:offline-docker-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:40.879877   16507 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:40.883304   16507 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:26:40.898977   16507 start.go:159] libmachine.API.Create for "offline-docker-232000" (driver="qemu2")
	I0520 04:26:40.899007   16507 client.go:168] LocalClient.Create starting
	I0520 04:26:40.899071   16507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:40.899106   16507 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:40.899117   16507 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:40.899165   16507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:40.899188   16507 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:40.899195   16507 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:40.899574   16507 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:41.032636   16507 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:41.191412   16507 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:41.191423   16507 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:41.191696   16507 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2
	I0520 04:26:41.207888   16507 main.go:141] libmachine: STDOUT: 
	I0520 04:26:41.207927   16507 main.go:141] libmachine: STDERR: 
	I0520 04:26:41.207988   16507 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2 +20000M
	I0520 04:26:41.220639   16507 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:41.220660   16507 main.go:141] libmachine: STDERR: 
	I0520 04:26:41.220687   16507 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2
	I0520 04:26:41.220692   16507 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:41.220728   16507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:bb:9e:8f:d7:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2
	I0520 04:26:41.223060   16507 main.go:141] libmachine: STDOUT: 
	I0520 04:26:41.223086   16507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:41.223112   16507 client.go:171] duration metric: took 324.101458ms to LocalClient.Create
	I0520 04:26:43.223728   16507 start.go:128] duration metric: took 2.343870959s to createHost
	I0520 04:26:43.223748   16507 start.go:83] releasing machines lock for "offline-docker-232000", held for 2.343940083s
	W0520 04:26:43.223763   16507 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:43.232279   16507 out.go:177] * Deleting "offline-docker-232000" in qemu2 ...
	W0520 04:26:43.241050   16507 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:43.241066   16507 start.go:728] Will try again in 5 seconds ...
	I0520 04:26:48.243076   16507 start.go:360] acquireMachinesLock for offline-docker-232000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:48.243189   16507 start.go:364] duration metric: took 89.5µs to acquireMachinesLock for "offline-docker-232000"
	I0520 04:26:48.243220   16507 start.go:93] Provisioning new machine with config: &{Name:offline-docker-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:48.243298   16507 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:48.252424   16507 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:26:48.268092   16507 start.go:159] libmachine.API.Create for "offline-docker-232000" (driver="qemu2")
	I0520 04:26:48.268118   16507 client.go:168] LocalClient.Create starting
	I0520 04:26:48.268207   16507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:48.268241   16507 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:48.268249   16507 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:48.268282   16507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:48.268308   16507 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:48.268314   16507 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:48.268600   16507 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:48.439790   16507 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:48.562625   16507 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:48.562635   16507 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:48.562851   16507 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2
	I0520 04:26:48.577076   16507 main.go:141] libmachine: STDOUT: 
	I0520 04:26:48.577106   16507 main.go:141] libmachine: STDERR: 
	I0520 04:26:48.577165   16507 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2 +20000M
	I0520 04:26:48.588844   16507 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:48.588866   16507 main.go:141] libmachine: STDERR: 
	I0520 04:26:48.588879   16507 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2
	I0520 04:26:48.588884   16507 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:48.588916   16507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:2e:81:bc:80:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/offline-docker-232000/disk.qcow2
	I0520 04:26:48.590679   16507 main.go:141] libmachine: STDOUT: 
	I0520 04:26:48.590696   16507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:48.590713   16507 client.go:171] duration metric: took 322.595875ms to LocalClient.Create
	I0520 04:26:50.592980   16507 start.go:128] duration metric: took 2.349676958s to createHost
	I0520 04:26:50.593057   16507 start.go:83] releasing machines lock for "offline-docker-232000", held for 2.349886583s
	W0520 04:26:50.593515   16507 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:50.601110   16507 out.go:177] 
	W0520 04:26:50.605154   16507 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:26:50.605194   16507 out.go:239] * 
	* 
	W0520 04:26:50.607975   16507 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:26:50.618074   16507 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-232000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-20 04:26:50.638816 -0700 PDT m=+698.141526542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-232000 -n offline-docker-232000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-232000 -n offline-docker-232000: exit status 7 (62.969667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-232000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-232000
--- FAIL: TestOffline (10.04s)
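
Both qemu2 VM creation attempts above fail the same way: the driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). Below is a minimal sketch, not part of the test suite and purely a hypothetical pre-flight check, that dials that unix socket to confirm whether anything is listening before the suite starts:

// vmnet_probe.go - checks the socket_vmnet unix socket used by the qemu2 driver.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// An error here corresponds to the driver's
		// `Failed to connect to "/var/run/socket_vmnet"` failure above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}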

TestAddons/Setup (10.49s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-313000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-313000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.488125s)

-- stdout --
	* [addons-313000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-313000" primary control-plane node in "addons-313000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-313000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:15:31.594702   15006 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:15:31.594832   15006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:31.594838   15006 out.go:304] Setting ErrFile to fd 2...
	I0520 04:15:31.594849   15006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:31.594966   15006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:15:31.596099   15006 out.go:298] Setting JSON to false
	I0520 04:15:31.612255   15006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8102,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:15:31.612315   15006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:15:31.616024   15006 out.go:177] * [addons-313000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:15:31.622949   15006 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:15:31.626918   15006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:15:31.623005   15006 notify.go:220] Checking for updates...
	I0520 04:15:31.632894   15006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:15:31.635952   15006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:15:31.638887   15006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:15:31.642009   15006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:15:31.645127   15006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:15:31.648931   15006 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:15:31.655922   15006 start.go:297] selected driver: qemu2
	I0520 04:15:31.655928   15006 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:15:31.655933   15006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:15:31.658145   15006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:15:31.660957   15006 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:15:31.664006   15006 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:15:31.664023   15006 cni.go:84] Creating CNI manager for ""
	I0520 04:15:31.664034   15006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:15:31.664043   15006 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:15:31.664087   15006 start.go:340] cluster config:
	{Name:addons-313000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:15:31.668670   15006 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:15:31.675842   15006 out.go:177] * Starting "addons-313000" primary control-plane node in "addons-313000" cluster
	I0520 04:15:31.679953   15006 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:15:31.679970   15006 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:15:31.679981   15006 cache.go:56] Caching tarball of preloaded images
	I0520 04:15:31.680035   15006 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:15:31.680040   15006 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:15:31.680250   15006 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/addons-313000/config.json ...
	I0520 04:15:31.680261   15006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/addons-313000/config.json: {Name:mk99b1ec20fe899043a3363a425b34414c645a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:15:31.680641   15006 start.go:360] acquireMachinesLock for addons-313000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:15:31.680704   15006 start.go:364] duration metric: took 56.708µs to acquireMachinesLock for "addons-313000"
	I0520 04:15:31.680716   15006 start.go:93] Provisioning new machine with config: &{Name:addons-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:addons-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:15:31.680742   15006 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:15:31.687900   15006 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 04:15:31.705841   15006 start.go:159] libmachine.API.Create for "addons-313000" (driver="qemu2")
	I0520 04:15:31.705871   15006 client.go:168] LocalClient.Create starting
	I0520 04:15:31.705991   15006 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:15:31.912705   15006 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:15:32.014014   15006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:15:32.503508   15006 main.go:141] libmachine: Creating SSH key...
	I0520 04:15:32.566890   15006 main.go:141] libmachine: Creating Disk image...
	I0520 04:15:32.566898   15006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:15:32.567106   15006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2
	I0520 04:15:32.579862   15006 main.go:141] libmachine: STDOUT: 
	I0520 04:15:32.579893   15006 main.go:141] libmachine: STDERR: 
	I0520 04:15:32.579952   15006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2 +20000M
	I0520 04:15:32.590706   15006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:15:32.590723   15006 main.go:141] libmachine: STDERR: 
	I0520 04:15:32.590743   15006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2
	I0520 04:15:32.590752   15006 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:15:32.590787   15006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:0d:e3:fe:3a:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2
	I0520 04:15:32.592602   15006 main.go:141] libmachine: STDOUT: 
	I0520 04:15:32.592618   15006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:15:32.592640   15006 client.go:171] duration metric: took 886.774458ms to LocalClient.Create
	I0520 04:15:34.594780   15006 start.go:128] duration metric: took 2.914056s to createHost
	I0520 04:15:34.594905   15006 start.go:83] releasing machines lock for "addons-313000", held for 2.914226958s
	W0520 04:15:34.594953   15006 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:15:34.602383   15006 out.go:177] * Deleting "addons-313000" in qemu2 ...
	W0520 04:15:34.627811   15006 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:15:34.627840   15006 start.go:728] Will try again in 5 seconds ...
	I0520 04:15:39.630005   15006 start.go:360] acquireMachinesLock for addons-313000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:15:39.630464   15006 start.go:364] duration metric: took 347.042µs to acquireMachinesLock for "addons-313000"
	I0520 04:15:39.630593   15006 start.go:93] Provisioning new machine with config: &{Name:addons-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:addons-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:15:39.630937   15006 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:15:39.641695   15006 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 04:15:39.691469   15006 start.go:159] libmachine.API.Create for "addons-313000" (driver="qemu2")
	I0520 04:15:39.691510   15006 client.go:168] LocalClient.Create starting
	I0520 04:15:39.691630   15006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:15:39.691689   15006 main.go:141] libmachine: Decoding PEM data...
	I0520 04:15:39.691709   15006 main.go:141] libmachine: Parsing certificate...
	I0520 04:15:39.691794   15006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:15:39.691837   15006 main.go:141] libmachine: Decoding PEM data...
	I0520 04:15:39.691852   15006 main.go:141] libmachine: Parsing certificate...
	I0520 04:15:39.692514   15006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:15:39.831271   15006 main.go:141] libmachine: Creating SSH key...
	I0520 04:15:39.988068   15006 main.go:141] libmachine: Creating Disk image...
	I0520 04:15:39.988074   15006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:15:39.988287   15006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2
	I0520 04:15:40.001053   15006 main.go:141] libmachine: STDOUT: 
	I0520 04:15:40.001070   15006 main.go:141] libmachine: STDERR: 
	I0520 04:15:40.001131   15006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2 +20000M
	I0520 04:15:40.012246   15006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:15:40.012262   15006 main.go:141] libmachine: STDERR: 
	I0520 04:15:40.012273   15006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2
	I0520 04:15:40.012276   15006 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:15:40.012315   15006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:85:ad:18:15:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/addons-313000/disk.qcow2
	I0520 04:15:40.014084   15006 main.go:141] libmachine: STDOUT: 
	I0520 04:15:40.014111   15006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:15:40.014122   15006 client.go:171] duration metric: took 322.6115ms to LocalClient.Create
	I0520 04:15:42.016372   15006 start.go:128] duration metric: took 2.385409166s to createHost
	I0520 04:15:42.016442   15006 start.go:83] releasing machines lock for "addons-313000", held for 2.385982709s
	W0520 04:15:42.016795   15006 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-313000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-313000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:15:42.026242   15006 out.go:177] 
	W0520 04:15:42.031459   15006 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:15:42.031500   15006 out.go:239] * 
	* 
	W0520 04:15:42.034362   15006 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:15:42.042337   15006 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-313000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.49s)
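The stderr above shows the single point at which this start (and the other starts in this report) fails: the qemu2 driver cannot connect to the socket_vmnet socket at /var/run/socket_vmnet (Connection refused), so each VM create is retried once and then aborted with GUEST_PROVISION. A minimal triage sketch for the CI host follows; it assumes socket_vmnet was installed through Homebrew as a launchd service (the common setup for minikube's qemu2 driver), so the service name and restart commands are assumptions about this agent, not facts taken from the log:

	# Is a socket_vmnet daemon running, and does its socket exist where minikube expects it?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If socket_vmnet was installed via Homebrew (assumed), inspect and restart its service.
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet

If the socket lives elsewhere (e.g. under the Homebrew prefix) or the daemon is simply not running, every qemu2-driver test in this run would fail in exactly this way.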

TestCertOptions (10.12s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-214000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-214000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.841054666s)

                                                
                                                
-- stdout --
	* [cert-options-214000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-214000" primary control-plane node in "cert-options-214000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-214000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-214000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-214000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.274167ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-214000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-214000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-214000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-214000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-214000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-214000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.277667ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-214000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-214000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-214000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-214000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-214000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-20 04:27:22.17379 -0700 PDT m=+729.676878917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-214000 -n cert-options-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-214000 -n cert-options-214000: exit status 7 (29.252458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-214000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-214000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-214000
--- FAIL: TestCertOptions (10.12s)

TestCertExpiration (195.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.851002792s)

                                                
                                                
-- stdout --
	* [cert-expiration-169000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-169000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.221208166s)

                                                
                                                
-- stdout --
	* [cert-expiration-169000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-169000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-169000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-169000" primary control-plane node in "cert-expiration-169000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-169000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-169000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-20 04:30:22.209216 -0700 PDT m=+909.714469167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-169000 -n cert-expiration-169000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-169000 -n cert-expiration-169000: exit status 7 (62.181208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-169000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-169000
--- FAIL: TestCertExpiration (195.23s)

TestDockerFlags (10.15s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-248000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-248000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.88956775s)

                                                
                                                
-- stdout --
	* [docker-flags-248000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-248000" primary control-plane node in "docker-flags-248000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-248000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:27:02.055467   16703 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:27:02.055615   16703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:27:02.055618   16703 out.go:304] Setting ErrFile to fd 2...
	I0520 04:27:02.055621   16703 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:27:02.055750   16703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:27:02.056848   16703 out.go:298] Setting JSON to false
	I0520 04:27:02.072848   16703 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8793,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:27:02.072911   16703 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:27:02.076368   16703 out.go:177] * [docker-flags-248000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:27:02.084228   16703 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:27:02.084286   16703 notify.go:220] Checking for updates...
	I0520 04:27:02.091287   16703 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:27:02.094262   16703 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:27:02.097231   16703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:27:02.100223   16703 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:27:02.103249   16703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:27:02.104990   16703 config.go:182] Loaded profile config "force-systemd-flag-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:27:02.105061   16703 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:27:02.105107   16703 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:27:02.109193   16703 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:27:02.115096   16703 start.go:297] selected driver: qemu2
	I0520 04:27:02.115102   16703 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:27:02.115108   16703 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:27:02.117296   16703 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:27:02.120248   16703 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:27:02.123316   16703 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0520 04:27:02.123329   16703 cni.go:84] Creating CNI manager for ""
	I0520 04:27:02.123335   16703 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:27:02.123339   16703 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:27:02.123364   16703 start.go:340] cluster config:
	{Name:docker-flags-248000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:27:02.127779   16703 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:27:02.135241   16703 out.go:177] * Starting "docker-flags-248000" primary control-plane node in "docker-flags-248000" cluster
	I0520 04:27:02.139252   16703 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:27:02.139269   16703 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:27:02.139284   16703 cache.go:56] Caching tarball of preloaded images
	I0520 04:27:02.139357   16703 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:27:02.139362   16703 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:27:02.139437   16703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/docker-flags-248000/config.json ...
	I0520 04:27:02.139449   16703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/docker-flags-248000/config.json: {Name:mkb95475db9d1029a0e0104d1c65a706083e3334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:27:02.139657   16703 start.go:360] acquireMachinesLock for docker-flags-248000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:27:02.139692   16703 start.go:364] duration metric: took 28µs to acquireMachinesLock for "docker-flags-248000"
	I0520 04:27:02.139705   16703 start.go:93] Provisioning new machine with config: &{Name:docker-flags-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:27:02.139738   16703 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:27:02.148207   16703 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:27:02.165325   16703 start.go:159] libmachine.API.Create for "docker-flags-248000" (driver="qemu2")
	I0520 04:27:02.165355   16703 client.go:168] LocalClient.Create starting
	I0520 04:27:02.165409   16703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:27:02.165439   16703 main.go:141] libmachine: Decoding PEM data...
	I0520 04:27:02.165448   16703 main.go:141] libmachine: Parsing certificate...
	I0520 04:27:02.165493   16703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:27:02.165514   16703 main.go:141] libmachine: Decoding PEM data...
	I0520 04:27:02.165520   16703 main.go:141] libmachine: Parsing certificate...
	I0520 04:27:02.165847   16703 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:27:02.294097   16703 main.go:141] libmachine: Creating SSH key...
	I0520 04:27:02.362346   16703 main.go:141] libmachine: Creating Disk image...
	I0520 04:27:02.362351   16703 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:27:02.362519   16703 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2
	I0520 04:27:02.375196   16703 main.go:141] libmachine: STDOUT: 
	I0520 04:27:02.375220   16703 main.go:141] libmachine: STDERR: 
	I0520 04:27:02.375286   16703 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2 +20000M
	I0520 04:27:02.386096   16703 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:27:02.386111   16703 main.go:141] libmachine: STDERR: 
	I0520 04:27:02.386134   16703 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2
	I0520 04:27:02.386140   16703 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:27:02.386175   16703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:a4:83:9d:dc:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2
	I0520 04:27:02.387915   16703 main.go:141] libmachine: STDOUT: 
	I0520 04:27:02.387930   16703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:27:02.387956   16703 client.go:171] duration metric: took 222.59925ms to LocalClient.Create
	I0520 04:27:04.390122   16703 start.go:128] duration metric: took 2.2503895s to createHost
	I0520 04:27:04.390227   16703 start.go:83] releasing machines lock for "docker-flags-248000", held for 2.250502458s
	W0520 04:27:04.390299   16703 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:27:04.403429   16703 out.go:177] * Deleting "docker-flags-248000" in qemu2 ...
	W0520 04:27:04.428762   16703 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:27:04.428789   16703 start.go:728] Will try again in 5 seconds ...
	I0520 04:27:09.430990   16703 start.go:360] acquireMachinesLock for docker-flags-248000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:27:09.541574   16703 start.go:364] duration metric: took 110.464375ms to acquireMachinesLock for "docker-flags-248000"
	I0520 04:27:09.541716   16703 start.go:93] Provisioning new machine with config: &{Name:docker-flags-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:27:09.541962   16703 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:27:09.550587   16703 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:27:09.599480   16703 start.go:159] libmachine.API.Create for "docker-flags-248000" (driver="qemu2")
	I0520 04:27:09.599526   16703 client.go:168] LocalClient.Create starting
	I0520 04:27:09.599662   16703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:27:09.599732   16703 main.go:141] libmachine: Decoding PEM data...
	I0520 04:27:09.599753   16703 main.go:141] libmachine: Parsing certificate...
	I0520 04:27:09.599811   16703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:27:09.599854   16703 main.go:141] libmachine: Decoding PEM data...
	I0520 04:27:09.599865   16703 main.go:141] libmachine: Parsing certificate...
	I0520 04:27:09.600476   16703 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:27:09.748807   16703 main.go:141] libmachine: Creating SSH key...
	I0520 04:27:09.838369   16703 main.go:141] libmachine: Creating Disk image...
	I0520 04:27:09.838375   16703 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:27:09.838583   16703 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2
	I0520 04:27:09.851145   16703 main.go:141] libmachine: STDOUT: 
	I0520 04:27:09.851189   16703 main.go:141] libmachine: STDERR: 
	I0520 04:27:09.851244   16703 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2 +20000M
	I0520 04:27:09.866136   16703 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:27:09.866162   16703 main.go:141] libmachine: STDERR: 
	I0520 04:27:09.866175   16703 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2
	I0520 04:27:09.866178   16703 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:27:09.866218   16703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:70:59:83:d4:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/docker-flags-248000/disk.qcow2
	I0520 04:27:09.867933   16703 main.go:141] libmachine: STDOUT: 
	I0520 04:27:09.867965   16703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:27:09.867977   16703 client.go:171] duration metric: took 268.448334ms to LocalClient.Create
	I0520 04:27:11.870135   16703 start.go:128] duration metric: took 2.328172042s to createHost
	I0520 04:27:11.870190   16703 start.go:83] releasing machines lock for "docker-flags-248000", held for 2.328600125s
	W0520 04:27:11.870531   16703 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-248000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-248000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:27:11.882285   16703 out.go:177] 
	W0520 04:27:11.890462   16703 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:27:11.890521   16703 out.go:239] * 
	* 
	W0520 04:27:11.893201   16703 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:27:11.903131   16703 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-248000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-248000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-248000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.627791ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-248000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-248000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-248000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-248000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-248000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-248000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-248000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-248000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-248000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.288583ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-248000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-248000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-248000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-248000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-248000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-248000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-05-20 04:27:12.044063 -0700 PDT m=+719.547030584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-248000 -n docker-flags-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-248000 -n docker-flags-248000: exit status 7 (27.969834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-248000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-248000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-248000
--- FAIL: TestDockerFlags (10.15s)
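Every start attempt in this failure (and in the qemu2 failures that follow) stops at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and every later ssh/status check sees a Stopped host. A minimal sketch of how one might verify that on the CI agent is below; the Homebrew service name is an assumption and may not match this host's install, which keeps the client under /opt/socket_vmnet.

	# check that the socket_vmnet daemon is running and its unix socket exists
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if socket_vmnet is managed as a Homebrew service, it can usually be restarted with:
	#   sudo brew services restart socket_vmnet
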

                                                
                                    
TestForceSystemdFlag (10.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-401000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-401000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.872988458s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-401000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-401000" primary control-plane node in "force-systemd-flag-401000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-401000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:26:57.083891   16680 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:26:57.084038   16680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:57.084042   16680 out.go:304] Setting ErrFile to fd 2...
	I0520 04:26:57.084044   16680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:57.084142   16680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:26:57.085186   16680 out.go:298] Setting JSON to false
	I0520 04:26:57.101303   16680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8788,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:26:57.101367   16680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:26:57.106191   16680 out.go:177] * [force-systemd-flag-401000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:26:57.112036   16680 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:26:57.112101   16680 notify.go:220] Checking for updates...
	I0520 04:26:57.116183   16680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:26:57.119171   16680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:26:57.123081   16680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:26:57.126139   16680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:26:57.129183   16680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:26:57.132552   16680 config.go:182] Loaded profile config "force-systemd-env-790000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:26:57.132619   16680 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:26:57.132676   16680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:26:57.137099   16680 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:26:57.144136   16680 start.go:297] selected driver: qemu2
	I0520 04:26:57.144145   16680 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:26:57.144151   16680 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:26:57.146297   16680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:26:57.149090   16680 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:26:57.152139   16680 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:26:57.152151   16680 cni.go:84] Creating CNI manager for ""
	I0520 04:26:57.152160   16680 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:26:57.152164   16680 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:26:57.152197   16680 start.go:340] cluster config:
	{Name:force-systemd-flag-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:26:57.156527   16680 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:57.163994   16680 out.go:177] * Starting "force-systemd-flag-401000" primary control-plane node in "force-systemd-flag-401000" cluster
	I0520 04:26:57.168077   16680 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:26:57.168093   16680 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:26:57.168109   16680 cache.go:56] Caching tarball of preloaded images
	I0520 04:26:57.168165   16680 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:26:57.168173   16680 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:26:57.168248   16680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/force-systemd-flag-401000/config.json ...
	I0520 04:26:57.168260   16680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/force-systemd-flag-401000/config.json: {Name:mk265d18fd4021c0fb3f0404bd0ac46d701235a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:26:57.168533   16680 start.go:360] acquireMachinesLock for force-systemd-flag-401000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:57.168569   16680 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "force-systemd-flag-401000"
	I0520 04:26:57.168581   16680 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:57.168612   16680 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:57.176062   16680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:26:57.192819   16680 start.go:159] libmachine.API.Create for "force-systemd-flag-401000" (driver="qemu2")
	I0520 04:26:57.192846   16680 client.go:168] LocalClient.Create starting
	I0520 04:26:57.192911   16680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:57.192942   16680 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:57.192951   16680 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:57.193002   16680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:57.193024   16680 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:57.193032   16680 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:57.193368   16680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:57.322588   16680 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:57.412037   16680 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:57.412042   16680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:57.412220   16680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2
	I0520 04:26:57.424487   16680 main.go:141] libmachine: STDOUT: 
	I0520 04:26:57.424508   16680 main.go:141] libmachine: STDERR: 
	I0520 04:26:57.424566   16680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2 +20000M
	I0520 04:26:57.435608   16680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:57.435626   16680 main.go:141] libmachine: STDERR: 
	I0520 04:26:57.435640   16680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2
	I0520 04:26:57.435645   16680 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:57.435688   16680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:cb:ef:18:00:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2
	I0520 04:26:57.437361   16680 main.go:141] libmachine: STDOUT: 
	I0520 04:26:57.437379   16680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:57.437399   16680 client.go:171] duration metric: took 244.550708ms to LocalClient.Create
	I0520 04:26:59.439595   16680 start.go:128] duration metric: took 2.270988625s to createHost
	I0520 04:26:59.439699   16680 start.go:83] releasing machines lock for "force-systemd-flag-401000", held for 2.271105708s
	W0520 04:26:59.439802   16680 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:59.458799   16680 out.go:177] * Deleting "force-systemd-flag-401000" in qemu2 ...
	W0520 04:26:59.477583   16680 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:59.477621   16680 start.go:728] Will try again in 5 seconds ...
	I0520 04:27:04.479772   16680 start.go:360] acquireMachinesLock for force-systemd-flag-401000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:27:04.480232   16680 start.go:364] duration metric: took 374.25µs to acquireMachinesLock for "force-systemd-flag-401000"
	I0520 04:27:04.480353   16680 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:27:04.480612   16680 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:27:04.489437   16680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:27:04.539647   16680 start.go:159] libmachine.API.Create for "force-systemd-flag-401000" (driver="qemu2")
	I0520 04:27:04.539717   16680 client.go:168] LocalClient.Create starting
	I0520 04:27:04.539825   16680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:27:04.539888   16680 main.go:141] libmachine: Decoding PEM data...
	I0520 04:27:04.539904   16680 main.go:141] libmachine: Parsing certificate...
	I0520 04:27:04.539972   16680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:27:04.540015   16680 main.go:141] libmachine: Decoding PEM data...
	I0520 04:27:04.540028   16680 main.go:141] libmachine: Parsing certificate...
	I0520 04:27:04.540991   16680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:27:04.685739   16680 main.go:141] libmachine: Creating SSH key...
	I0520 04:27:04.856836   16680 main.go:141] libmachine: Creating Disk image...
	I0520 04:27:04.856843   16680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:27:04.857039   16680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2
	I0520 04:27:04.869987   16680 main.go:141] libmachine: STDOUT: 
	I0520 04:27:04.870009   16680 main.go:141] libmachine: STDERR: 
	I0520 04:27:04.870067   16680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2 +20000M
	I0520 04:27:04.880877   16680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:27:04.880898   16680 main.go:141] libmachine: STDERR: 
	I0520 04:27:04.880911   16680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2
	I0520 04:27:04.880916   16680 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:27:04.880974   16680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:c5:24:42:50:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-flag-401000/disk.qcow2
	I0520 04:27:04.882665   16680 main.go:141] libmachine: STDOUT: 
	I0520 04:27:04.882680   16680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:27:04.882694   16680 client.go:171] duration metric: took 342.972875ms to LocalClient.Create
	I0520 04:27:06.884899   16680 start.go:128] duration metric: took 2.404287208s to createHost
	I0520 04:27:06.884947   16680 start.go:83] releasing machines lock for "force-systemd-flag-401000", held for 2.404715167s
	W0520 04:27:06.885266   16680 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:27:06.897939   16680 out.go:177] 
	W0520 04:27:06.901819   16680 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:27:06.901852   16680 out.go:239] * 
	* 
	W0520 04:27:06.904566   16680 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:27:06.915801   16680 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-401000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-401000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-401000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.879833ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-401000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-401000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-401000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-20 04:27:07.009416 -0700 PDT m=+714.512322876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-401000 -n force-systemd-flag-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-401000 -n force-systemd-flag-401000: exit status 7 (32.679209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-401000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-401000
--- FAIL: TestForceSystemdFlag (10.08s)
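For reference, the assertion that fails above never gets a real answer because the host is Stopped: docker_test.go asks the guest's Docker daemon for its cgroup driver and, since the cluster was started with --force-systemd, the expected value is systemd (that expectation is inferred from the flag's purpose, not from this log). On a healthy cluster the check reduces to roughly:

	out/minikube-darwin-arm64 -p force-systemd-flag-401000 ssh "docker info --format {{.CgroupDriver}}"
	# expected output when --force-systemd took effect: systemd
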

                                                
                                    
TestForceSystemdEnv (11.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-790000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-790000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.012388s)

                                                
                                                
-- stdout --
	* [force-systemd-env-790000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-790000" primary control-plane node in "force-systemd-env-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:26:50.830010   16648 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:26:50.830139   16648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:50.830142   16648 out.go:304] Setting ErrFile to fd 2...
	I0520 04:26:50.830145   16648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:50.830279   16648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:26:50.831382   16648 out.go:298] Setting JSON to false
	I0520 04:26:50.847422   16648 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8781,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:26:50.847535   16648 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:26:50.852091   16648 out.go:177] * [force-systemd-env-790000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:26:50.858883   16648 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:26:50.862978   16648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:26:50.858933   16648 notify.go:220] Checking for updates...
	I0520 04:26:50.865907   16648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:26:50.868911   16648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:26:50.871947   16648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:26:50.874941   16648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0520 04:26:50.878346   16648 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:26:50.878392   16648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:26:50.882944   16648 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:26:50.889885   16648 start.go:297] selected driver: qemu2
	I0520 04:26:50.889892   16648 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:26:50.889897   16648 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:26:50.892128   16648 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:26:50.894889   16648 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:26:50.898030   16648 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:26:50.898054   16648 cni.go:84] Creating CNI manager for ""
	I0520 04:26:50.898061   16648 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:26:50.898065   16648 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:26:50.898100   16648 start.go:340] cluster config:
	{Name:force-systemd-env-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:26:50.902519   16648 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:50.909904   16648 out.go:177] * Starting "force-systemd-env-790000" primary control-plane node in "force-systemd-env-790000" cluster
	I0520 04:26:50.913892   16648 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:26:50.913908   16648 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:26:50.913918   16648 cache.go:56] Caching tarball of preloaded images
	I0520 04:26:50.913975   16648 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:26:50.913981   16648 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:26:50.914034   16648 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/force-systemd-env-790000/config.json ...
	I0520 04:26:50.914044   16648 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/force-systemd-env-790000/config.json: {Name:mkacfd853c9ce25ac157b004cdd57050d3b82066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:26:50.914258   16648 start.go:360] acquireMachinesLock for force-systemd-env-790000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:50.914295   16648 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "force-systemd-env-790000"
	I0520 04:26:50.914307   16648 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:50.914338   16648 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:50.918974   16648 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:26:50.936528   16648 start.go:159] libmachine.API.Create for "force-systemd-env-790000" (driver="qemu2")
	I0520 04:26:50.936557   16648 client.go:168] LocalClient.Create starting
	I0520 04:26:50.936625   16648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:50.936660   16648 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:50.936669   16648 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:50.936708   16648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:50.936730   16648 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:50.936738   16648 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:50.937084   16648 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:51.065670   16648 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:51.173167   16648 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:51.173173   16648 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:51.173368   16648 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2
	I0520 04:26:51.185910   16648 main.go:141] libmachine: STDOUT: 
	I0520 04:26:51.185933   16648 main.go:141] libmachine: STDERR: 
	I0520 04:26:51.185983   16648 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2 +20000M
	I0520 04:26:51.196801   16648 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:51.196825   16648 main.go:141] libmachine: STDERR: 
	I0520 04:26:51.196838   16648 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2
	I0520 04:26:51.196842   16648 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:51.196876   16648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:5b:68:bb:97:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2
	I0520 04:26:51.198528   16648 main.go:141] libmachine: STDOUT: 
	I0520 04:26:51.198547   16648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:51.198572   16648 client.go:171] duration metric: took 262.014209ms to LocalClient.Create
	I0520 04:26:53.200618   16648 start.go:128] duration metric: took 2.2862985s to createHost
	I0520 04:26:53.200641   16648 start.go:83] releasing machines lock for "force-systemd-env-790000", held for 2.286369667s
	W0520 04:26:53.200660   16648 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:53.209575   16648 out.go:177] * Deleting "force-systemd-env-790000" in qemu2 ...
	W0520 04:26:53.217753   16648 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:53.217762   16648 start.go:728] Will try again in 5 seconds ...
	I0520 04:26:58.219959   16648 start.go:360] acquireMachinesLock for force-systemd-env-790000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:59.439893   16648 start.go:364] duration metric: took 1.219797333s to acquireMachinesLock for "force-systemd-env-790000"
	I0520 04:26:59.440048   16648 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:59.440348   16648 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:59.450730   16648 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 04:26:59.499692   16648 start.go:159] libmachine.API.Create for "force-systemd-env-790000" (driver="qemu2")
	I0520 04:26:59.499748   16648 client.go:168] LocalClient.Create starting
	I0520 04:26:59.499888   16648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:59.499952   16648 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:59.499967   16648 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:59.500028   16648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:59.500079   16648 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:59.500094   16648 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:59.500761   16648 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:59.648481   16648 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:59.736300   16648 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:59.736305   16648 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:59.736518   16648 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2
	I0520 04:26:59.752863   16648 main.go:141] libmachine: STDOUT: 
	I0520 04:26:59.752883   16648 main.go:141] libmachine: STDERR: 
	I0520 04:26:59.752940   16648 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2 +20000M
	I0520 04:26:59.763822   16648 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:59.763838   16648 main.go:141] libmachine: STDERR: 
	I0520 04:26:59.763858   16648 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2
	I0520 04:26:59.763863   16648 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:59.763899   16648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:45:18:f9:11:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/force-systemd-env-790000/disk.qcow2
	I0520 04:26:59.765540   16648 main.go:141] libmachine: STDOUT: 
	I0520 04:26:59.765557   16648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:59.765568   16648 client.go:171] duration metric: took 265.816792ms to LocalClient.Create
	I0520 04:27:01.767800   16648 start.go:128] duration metric: took 2.327432042s to createHost
	I0520 04:27:01.767873   16648 start.go:83] releasing machines lock for "force-systemd-env-790000", held for 2.327952208s
	W0520 04:27:01.768204   16648 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:27:01.781668   16648 out.go:177] 
	W0520 04:27:01.787545   16648 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:27:01.787648   16648 out.go:239] * 
	* 
	W0520 04:27:01.790505   16648 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:27:01.798554   16648 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-790000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-790000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-790000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.862125ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-790000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-790000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-790000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-20 04:27:01.897401 -0700 PDT m=+709.400246542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-790000 -n force-systemd-env-790000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-790000 -n force-systemd-env-790000: exit status 7 (34.547ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-790000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-790000
--- FAIL: TestForceSystemdEnv (11.23s)
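Every attempt above dies at the same step: libmachine hands the assembled qemu-system-aarch64 command line to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the /var/run/socket_vmnet unix socket, so the VM never starts and the profile is torn down. A minimal Go sketch (not part of the test suite; the socket path is copied from the log, and dialing it may need the same privileges the daemon runs with) that reproduces the connectivity check in isolation:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same unix socket that socket_vmnet_client reports as unreachable above.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the failure in the log and usually
			// means the socket_vmnet daemon is not running on the host.
			fmt.Printf("cannot reach %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If this dial fails the same way, the problem is on the host side rather than in minikube, and every later ssh/status step in the test can only observe a stopped profile.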

                                                
                                    
TestErrorSpam/setup (9.84s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-630000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-630000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 --driver=qemu2 : exit status 80 (9.842445334s)

                                                
                                                
-- stdout --
	* [nospam-630000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-630000" primary control-plane node in "nospam-630000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-630000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-630000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-630000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-630000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-630000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18932
- KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-630000" primary control-plane node in "nospam-630000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-630000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-630000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.84s)
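error_spam_test fails the run on two counts: every stderr line from the start above is reported as unexpected noise, and stdout is missing the kubeadm init sub-steps a successful start would print. A hypothetical, simplified sketch of that style of check (the real assertions in error_spam_test.go are more nuanced and allow some messages; this version flags everything):

	package main

	import (
		"fmt"
		"strings"
	)

	// checkStartOutput flags any non-empty stderr line and any missing kubeadm
	// init sub-step, mirroring the two kinds of failures reported above.
	func checkStartOutput(stdout, stderr string) []string {
		var problems []string
		for _, line := range strings.Split(stderr, "\n") {
			if strings.TrimSpace(line) != "" {
				problems = append(problems, "unexpected stderr: "+line)
			}
		}
		for _, step := range []string{
			"Generating certificates and keys ...",
			"Booting up control plane ...",
			"Configuring RBAC rules ...",
		} {
			if !strings.Contains(stdout, step) {
				problems = append(problems, "missing kubeadm init sub-step: "+step)
			}
		}
		return problems
	}

	func main() {
		stderr := "! StartHost failed, but will try again: ..."
		for _, p := range checkStartOutput("", stderr) {
			fmt.Println(p)
		}
	}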

                                                
                                    
TestFunctional/serial/StartWithProxy (9.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-873000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-873000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.728558208s)

                                                
                                                
-- stdout --
	* [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-873000" primary control-plane node in "functional-873000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-873000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52813 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52813 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52813 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-873000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18932
- KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-873000" primary control-plane node in "functional-873000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-873000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52813 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52813 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52813 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (66.837292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.80s)
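This test sets a local HTTP proxy in the environment before calling start and then expects the proxy to be acknowledged ("Found network options:" on stdout, "You appear to be using a proxy" on stderr); because the VM never comes up, neither message is printed. A hypothetical stand-alone replay of the proxied invocation (binary path, flags, and the proxy address are copied from the log; assumes it is run from the integration workspace root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Re-run the failing start with the same local proxy the test injects.
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-873000",
			"--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
		cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:52813")

		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("start failed:", err)
		}
	}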

                                                
                                    
TestFunctional/serial/SoftStart (5.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-873000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-873000 --alsologtostderr -v=8: exit status 80 (5.182215709s)

                                                
                                                
-- stdout --
	* [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-873000" primary control-plane node in "functional-873000" cluster
	* Restarting existing qemu2 VM for "functional-873000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-873000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:16:11.971039   15145 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:16:11.971167   15145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:16:11.971170   15145 out.go:304] Setting ErrFile to fd 2...
	I0520 04:16:11.971174   15145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:16:11.971292   15145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:16:11.972343   15145 out.go:298] Setting JSON to false
	I0520 04:16:11.988392   15145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8142,"bootTime":1716195629,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:16:11.988468   15145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:16:11.993951   15145 out.go:177] * [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:16:12.000862   15145 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:16:12.000936   15145 notify.go:220] Checking for updates...
	I0520 04:16:12.007731   15145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:16:12.010912   15145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:16:12.013866   15145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:16:12.015239   15145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:16:12.017848   15145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:16:12.021222   15145 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:16:12.021270   15145 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:16:12.025725   15145 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:16:12.032851   15145 start.go:297] selected driver: qemu2
	I0520 04:16:12.032858   15145 start.go:901] validating driver "qemu2" against &{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:16:12.032928   15145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:16:12.035229   15145 cni.go:84] Creating CNI manager for ""
	I0520 04:16:12.035247   15145 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:16:12.035300   15145 start.go:340] cluster config:
	{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:16:12.039586   15145 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:16:12.046822   15145 out.go:177] * Starting "functional-873000" primary control-plane node in "functional-873000" cluster
	I0520 04:16:12.050893   15145 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:16:12.050911   15145 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:16:12.050930   15145 cache.go:56] Caching tarball of preloaded images
	I0520 04:16:12.050990   15145 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:16:12.050995   15145 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:16:12.051052   15145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/functional-873000/config.json ...
	I0520 04:16:12.051496   15145 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:16:12.051525   15145 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "functional-873000"
	I0520 04:16:12.051535   15145 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:16:12.051543   15145 fix.go:54] fixHost starting: 
	I0520 04:16:12.051651   15145 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
	W0520 04:16:12.051660   15145 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:16:12.059910   15145 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
	I0520 04:16:12.063860   15145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
	I0520 04:16:12.065978   15145 main.go:141] libmachine: STDOUT: 
	I0520 04:16:12.065999   15145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:16:12.066025   15145 fix.go:56] duration metric: took 14.482667ms for fixHost
	I0520 04:16:12.066028   15145 start.go:83] releasing machines lock for "functional-873000", held for 14.499ms
	W0520 04:16:12.066034   15145 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:16:12.066065   15145 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:16:12.066070   15145 start.go:728] Will try again in 5 seconds ...
	I0520 04:16:17.066582   15145 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:16:17.067097   15145 start.go:364] duration metric: took 398.125µs to acquireMachinesLock for "functional-873000"
	I0520 04:16:17.067220   15145 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:16:17.067241   15145 fix.go:54] fixHost starting: 
	I0520 04:16:17.068059   15145 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
	W0520 04:16:17.068089   15145 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:16:17.075624   15145 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
	I0520 04:16:17.079953   15145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
	I0520 04:16:17.090022   15145 main.go:141] libmachine: STDOUT: 
	I0520 04:16:17.090103   15145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:16:17.090245   15145 fix.go:56] duration metric: took 23.005083ms for fixHost
	I0520 04:16:17.090266   15145 start.go:83] releasing machines lock for "functional-873000", held for 23.144917ms
	W0520 04:16:17.090491   15145 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:16:17.097626   15145 out.go:177] 
	W0520 04:16:17.101671   15145 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:16:17.101695   15145 out.go:239] * 
	* 
	W0520 04:16:17.104747   15145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:16:17.110605   15145 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-873000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.184014375s for "functional-873000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (68.255458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.823875ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-873000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.407458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
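The check itself is trivial: run kubectl config current-context and compare it to the profile name. Since the cluster never started, minikube never wrote a context into the kubeconfig, so kubectl has nothing to report. A stand-alone sketch of the same comparison (profile name taken from the log; assumes kubectl is on PATH and KUBECONFIG points at the integration kubeconfig):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "functional-873000"

		out, err := exec.Command("kubectl", "config", "current-context").Output()
		if err != nil {
			// Matches the "current-context is not set" error captured above.
			fmt.Println("no current context:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != profile {
			fmt.Printf("expected context %q, got %q\n", profile, got)
		}
	}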

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-873000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-873000 get po -A: exit status 1 (26.269375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-873000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-873000\n"*: args "kubectl --context functional-873000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-873000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.594042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl images: exit status 83 (40.86875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.645167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-873000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.97675ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.952875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-873000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
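cache_reload is a fixed sequence: delete the pause image over ssh, confirm with crictl that it is gone, run cache reload, then confirm with crictl that the image is back. Only the ssh steps need a running VM, which is why they exit 83 here while the cache reload itself succeeds. A hypothetical replay of that sequence (binary path and commands copied from the log; assumes it is run from the integration workspace root):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const profile = "functional-873000"
		steps := [][]string{
			{"-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest"},
			{"-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"},
			{"-p", profile, "cache", "reload"},
			{"-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"},
		}
		for _, args := range steps {
			// The ssh steps require a running control-plane host; with the profile
			// stopped they return exit status 83, as seen above.
			out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
			fmt.Printf("minikube %v\n%s", args, out)
			if err != nil {
				fmt.Println("exit:", err)
			}
		}
	}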

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 kubectl -- --context functional-873000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 kubectl -- --context functional-873000 get pods: exit status 1 (604.814291ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-873000
	* no server found for cluster "functional-873000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-873000 kubectl -- --context functional-873000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (30.69275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-873000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-873000 get pods: exit status 1 (913.706791ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-873000
	* no server found for cluster "functional-873000"

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-873000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (28.974416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.94s)

                                                
                                    
TestFunctional/serial/ExtraConfig (5.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-873000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-873000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.188076541s)

                                                
                                                
-- stdout --
	* [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-873000" primary control-plane node in "functional-873000" cluster
	* Restarting existing qemu2 VM for "functional-873000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-873000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-873000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.188643208s for "functional-873000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (69.227458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-873000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-873000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.349542ms)

                                                
                                                
** stderr ** 
	error: context "functional-873000" does not exist

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-873000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.285916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 logs: exit status 83 (76.692791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | -p download-only-078000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| delete  | -p download-only-078000                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| start   | -o=json --download-only                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | -p download-only-998000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| delete  | -p download-only-078000                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| start   | --download-only -p                                                       | binary-mirror-947000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | binary-mirror-947000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52781                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-947000                                                  | binary-mirror-947000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| addons  | enable dashboard -p                                                      | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | addons-313000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | addons-313000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-313000 --wait=true                                             | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-313000                                                         | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| start   | -p nospam-630000 -n=1 --memory=2250 --wait=false                         | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-630000                                                         | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | minikube-local-cache-test:functional-873000                              |                      |         |         |                     |                     |
	| cache   | functional-873000 cache delete                                           | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | minikube-local-cache-test:functional-873000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	| ssh     | functional-873000 ssh sudo                                               | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-873000                                                        | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-873000 ssh                                                    | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-873000 cache reload                                           | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	| ssh     | functional-873000 ssh                                                    | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-873000 kubectl --                                             | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | --context functional-873000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:16:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:16:22.194311   15226 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:16:22.194431   15226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:16:22.194433   15226 out.go:304] Setting ErrFile to fd 2...
	I0520 04:16:22.194434   15226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:16:22.194548   15226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:16:22.195609   15226 out.go:298] Setting JSON to false
	I0520 04:16:22.211653   15226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8153,"bootTime":1716195629,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:16:22.211714   15226 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:16:22.216182   15226 out.go:177] * [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:16:22.225146   15226 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:16:22.229145   15226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:16:22.225198   15226 notify.go:220] Checking for updates...
	I0520 04:16:22.236140   15226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:16:22.243966   15226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:16:22.247204   15226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:16:22.250241   15226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:16:22.253551   15226 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:16:22.253605   15226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:16:22.258133   15226 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:16:22.265169   15226 start.go:297] selected driver: qemu2
	I0520 04:16:22.265173   15226 start.go:901] validating driver "qemu2" against &{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:16:22.265224   15226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:16:22.267486   15226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:16:22.267507   15226 cni.go:84] Creating CNI manager for ""
	I0520 04:16:22.267514   15226 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:16:22.267564   15226 start.go:340] cluster config:
	{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:16:22.271868   15226 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:16:22.279179   15226 out.go:177] * Starting "functional-873000" primary control-plane node in "functional-873000" cluster
	I0520 04:16:22.283196   15226 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:16:22.283212   15226 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:16:22.283223   15226 cache.go:56] Caching tarball of preloaded images
	I0520 04:16:22.283285   15226 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:16:22.283289   15226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:16:22.283353   15226 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/functional-873000/config.json ...
	I0520 04:16:22.283821   15226 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:16:22.283862   15226 start.go:364] duration metric: took 35.917µs to acquireMachinesLock for "functional-873000"
	I0520 04:16:22.283871   15226 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:16:22.283876   15226 fix.go:54] fixHost starting: 
	I0520 04:16:22.284012   15226 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
	W0520 04:16:22.284020   15226 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:16:22.291056   15226 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
	I0520 04:16:22.295205   15226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
	I0520 04:16:22.297271   15226 main.go:141] libmachine: STDOUT: 
	I0520 04:16:22.297287   15226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:16:22.297316   15226 fix.go:56] duration metric: took 13.441666ms for fixHost
	I0520 04:16:22.297318   15226 start.go:83] releasing machines lock for "functional-873000", held for 13.453541ms
	W0520 04:16:22.297324   15226 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:16:22.297355   15226 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:16:22.297359   15226 start.go:728] Will try again in 5 seconds ...
	I0520 04:16:27.299606   15226 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:16:27.299961   15226 start.go:364] duration metric: took 296.25µs to acquireMachinesLock for "functional-873000"
	I0520 04:16:27.300060   15226 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:16:27.300074   15226 fix.go:54] fixHost starting: 
	I0520 04:16:27.300749   15226 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
	W0520 04:16:27.300769   15226 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:16:27.306826   15226 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
	I0520 04:16:27.311311   15226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
	I0520 04:16:27.320280   15226 main.go:141] libmachine: STDOUT: 
	I0520 04:16:27.320338   15226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:16:27.320419   15226 fix.go:56] duration metric: took 20.345458ms for fixHost
	I0520 04:16:27.320428   15226 start.go:83] releasing machines lock for "functional-873000", held for 20.454333ms
	W0520 04:16:27.320613   15226 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:16:27.328135   15226 out.go:177] 
	W0520 04:16:27.332222   15226 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:16:27.332242   15226 out.go:239] * 
	W0520 04:16:27.334928   15226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:16:27.342163   15226 out.go:177] 
	
	
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-873000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | -p download-only-078000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-078000                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| start   | -o=json --download-only                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | -p download-only-998000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-078000                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| start   | --download-only -p                                                       | binary-mirror-947000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | binary-mirror-947000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52781                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-947000                                                  | binary-mirror-947000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| addons  | enable dashboard -p                                                      | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | addons-313000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | addons-313000                                                            |                      |         |         |                     |                     |
| start   | -p addons-313000 --wait=true                                             | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-313000                                                         | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| start   | -p nospam-630000 -n=1 --memory=2250 --wait=false                         | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-630000                                                         | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | minikube-local-cache-test:functional-873000                              |                      |         |         |                     |                     |
| cache   | functional-873000 cache delete                                           | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | minikube-local-cache-test:functional-873000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
| ssh     | functional-873000 ssh sudo                                               | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-873000                                                        | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-873000 ssh                                                    | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-873000 cache reload                                           | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
| ssh     | functional-873000 ssh                                                    | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-873000 kubectl --                                             | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --context functional-873000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/20 04:16:22
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0520 04:16:22.194311   15226 out.go:291] Setting OutFile to fd 1 ...
I0520 04:16:22.194431   15226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:16:22.194433   15226 out.go:304] Setting ErrFile to fd 2...
I0520 04:16:22.194434   15226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:16:22.194548   15226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:16:22.195609   15226 out.go:298] Setting JSON to false
I0520 04:16:22.211653   15226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8153,"bootTime":1716195629,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0520 04:16:22.211714   15226 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0520 04:16:22.216182   15226 out.go:177] * [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0520 04:16:22.225146   15226 out.go:177]   - MINIKUBE_LOCATION=18932
I0520 04:16:22.229145   15226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
I0520 04:16:22.225198   15226 notify.go:220] Checking for updates...
I0520 04:16:22.236140   15226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0520 04:16:22.243966   15226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0520 04:16:22.247204   15226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
I0520 04:16:22.250241   15226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0520 04:16:22.253551   15226 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:16:22.253605   15226 driver.go:392] Setting default libvirt URI to qemu:///system
I0520 04:16:22.258133   15226 out.go:177] * Using the qemu2 driver based on existing profile
I0520 04:16:22.265169   15226 start.go:297] selected driver: qemu2
I0520 04:16:22.265173   15226 start.go:901] validating driver "qemu2" against &{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:16:22.265224   15226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0520 04:16:22.267486   15226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0520 04:16:22.267507   15226 cni.go:84] Creating CNI manager for ""
I0520 04:16:22.267514   15226 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0520 04:16:22.267564   15226 start.go:340] cluster config:
{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:16:22.271868   15226 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0520 04:16:22.279179   15226 out.go:177] * Starting "functional-873000" primary control-plane node in "functional-873000" cluster
I0520 04:16:22.283196   15226 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 04:16:22.283212   15226 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0520 04:16:22.283223   15226 cache.go:56] Caching tarball of preloaded images
I0520 04:16:22.283285   15226 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0520 04:16:22.283289   15226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 04:16:22.283353   15226 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/functional-873000/config.json ...
I0520 04:16:22.283821   15226 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:16:22.283862   15226 start.go:364] duration metric: took 35.917µs to acquireMachinesLock for "functional-873000"
I0520 04:16:22.283871   15226 start.go:96] Skipping create...Using existing machine configuration
I0520 04:16:22.283876   15226 fix.go:54] fixHost starting: 
I0520 04:16:22.284012   15226 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
W0520 04:16:22.284020   15226 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:16:22.291056   15226 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
I0520 04:16:22.295205   15226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
I0520 04:16:22.297271   15226 main.go:141] libmachine: STDOUT: 
I0520 04:16:22.297287   15226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 04:16:22.297316   15226 fix.go:56] duration metric: took 13.441666ms for fixHost
I0520 04:16:22.297318   15226 start.go:83] releasing machines lock for "functional-873000", held for 13.453541ms
W0520 04:16:22.297324   15226 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:16:22.297355   15226 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:16:22.297359   15226 start.go:728] Will try again in 5 seconds ...
I0520 04:16:27.299606   15226 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:16:27.299961   15226 start.go:364] duration metric: took 296.25µs to acquireMachinesLock for "functional-873000"
I0520 04:16:27.300060   15226 start.go:96] Skipping create...Using existing machine configuration
I0520 04:16:27.300074   15226 fix.go:54] fixHost starting: 
I0520 04:16:27.300749   15226 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
W0520 04:16:27.300769   15226 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:16:27.306826   15226 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
I0520 04:16:27.311311   15226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
I0520 04:16:27.320280   15226 main.go:141] libmachine: STDOUT: 
I0520 04:16:27.320338   15226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 04:16:27.320419   15226 fix.go:56] duration metric: took 20.345458ms for fixHost
I0520 04:16:27.320428   15226 start.go:83] releasing machines lock for "functional-873000", held for 20.454333ms
W0520 04:16:27.320613   15226 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:16:27.328135   15226 out.go:177] 
W0520 04:16:27.332222   15226 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:16:27.332242   15226 out.go:239] * 
W0520 04:16:27.334928   15226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:16:27.342163   15226 out.go:177] 

* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1947970225/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | -p download-only-078000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-078000                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| start   | -o=json --download-only                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | -p download-only-998000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-078000                                                  | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| start   | --download-only -p                                                       | binary-mirror-947000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | binary-mirror-947000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52781                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-947000                                                  | binary-mirror-947000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| addons  | enable dashboard -p                                                      | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | addons-313000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | addons-313000                                                            |                      |         |         |                     |                     |
| start   | -p addons-313000 --wait=true                                             | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-313000                                                         | addons-313000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
| start   | -p nospam-630000 -n=1 --memory=2250 --wait=false                         | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-630000 --log_dir                                                  | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-630000                                                         | nospam-630000        | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-873000 cache add                                              | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | minikube-local-cache-test:functional-873000                              |                      |         |         |                     |                     |
| cache   | functional-873000 cache delete                                           | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | minikube-local-cache-test:functional-873000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
| ssh     | functional-873000 ssh sudo                                               | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-873000                                                        | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-873000 ssh                                                    | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-873000 cache reload                                           | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
| ssh     | functional-873000 ssh                                                    | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 04:16 PDT | 20 May 24 04:16 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-873000 kubectl --                                             | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --context functional-873000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-873000                                                     | functional-873000    | jenkins | v1.33.1 | 20 May 24 04:16 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/20 04:16:22
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0520 04:16:22.194311   15226 out.go:291] Setting OutFile to fd 1 ...
I0520 04:16:22.194431   15226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:16:22.194433   15226 out.go:304] Setting ErrFile to fd 2...
I0520 04:16:22.194434   15226 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:16:22.194548   15226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:16:22.195609   15226 out.go:298] Setting JSON to false
I0520 04:16:22.211653   15226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8153,"bootTime":1716195629,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0520 04:16:22.211714   15226 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0520 04:16:22.216182   15226 out.go:177] * [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0520 04:16:22.225146   15226 out.go:177]   - MINIKUBE_LOCATION=18932
I0520 04:16:22.229145   15226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
I0520 04:16:22.225198   15226 notify.go:220] Checking for updates...
I0520 04:16:22.236140   15226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0520 04:16:22.243966   15226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0520 04:16:22.247204   15226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
I0520 04:16:22.250241   15226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0520 04:16:22.253551   15226 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:16:22.253605   15226 driver.go:392] Setting default libvirt URI to qemu:///system
I0520 04:16:22.258133   15226 out.go:177] * Using the qemu2 driver based on existing profile
I0520 04:16:22.265169   15226 start.go:297] selected driver: qemu2
I0520 04:16:22.265173   15226 start.go:901] validating driver "qemu2" against &{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:16:22.265224   15226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0520 04:16:22.267486   15226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0520 04:16:22.267507   15226 cni.go:84] Creating CNI manager for ""
I0520 04:16:22.267514   15226 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0520 04:16:22.267564   15226 start.go:340] cluster config:
{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 04:16:22.271868   15226 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0520 04:16:22.279179   15226 out.go:177] * Starting "functional-873000" primary control-plane node in "functional-873000" cluster
I0520 04:16:22.283196   15226 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 04:16:22.283212   15226 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0520 04:16:22.283223   15226 cache.go:56] Caching tarball of preloaded images
I0520 04:16:22.283285   15226 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0520 04:16:22.283289   15226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 04:16:22.283353   15226 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/functional-873000/config.json ...
I0520 04:16:22.283821   15226 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:16:22.283862   15226 start.go:364] duration metric: took 35.917µs to acquireMachinesLock for "functional-873000"
I0520 04:16:22.283871   15226 start.go:96] Skipping create...Using existing machine configuration
I0520 04:16:22.283876   15226 fix.go:54] fixHost starting: 
I0520 04:16:22.284012   15226 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
W0520 04:16:22.284020   15226 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:16:22.291056   15226 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
I0520 04:16:22.295205   15226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
I0520 04:16:22.297271   15226 main.go:141] libmachine: STDOUT: 
I0520 04:16:22.297287   15226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0520 04:16:22.297316   15226 fix.go:56] duration metric: took 13.441666ms for fixHost
I0520 04:16:22.297318   15226 start.go:83] releasing machines lock for "functional-873000", held for 13.453541ms
W0520 04:16:22.297324   15226 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:16:22.297355   15226 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:16:22.297359   15226 start.go:728] Will try again in 5 seconds ...
I0520 04:16:27.299606   15226 start.go:360] acquireMachinesLock for functional-873000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 04:16:27.299961   15226 start.go:364] duration metric: took 296.25µs to acquireMachinesLock for "functional-873000"
I0520 04:16:27.300060   15226 start.go:96] Skipping create...Using existing machine configuration
I0520 04:16:27.300074   15226 fix.go:54] fixHost starting: 
I0520 04:16:27.300749   15226 fix.go:112] recreateIfNeeded on functional-873000: state=Stopped err=<nil>
W0520 04:16:27.300769   15226 fix.go:138] unexpected machine state, will restart: <nil>
I0520 04:16:27.306826   15226 out.go:177] * Restarting existing qemu2 VM for "functional-873000" ...
I0520 04:16:27.311311   15226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5c:7c:36:10:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/functional-873000/disk.qcow2
I0520 04:16:27.320280   15226 main.go:141] libmachine: STDOUT: 
I0520 04:16:27.320338   15226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0520 04:16:27.320419   15226 fix.go:56] duration metric: took 20.345458ms for fixHost
I0520 04:16:27.320428   15226 start.go:83] releasing machines lock for "functional-873000", held for 20.454333ms
W0520 04:16:27.320613   15226 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 04:16:27.328135   15226 out.go:177] 
W0520 04:16:27.332222   15226 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 04:16:27.332242   15226 out.go:239] * 
W0520 04:16:27.334928   15226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:16:27.342163   15226 out.go:177] 

                                                
                                                

                                                
                                                
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
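Note on the failure above: every restart attempt launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. the socket_vmnet daemon is not reachable on the host, so the VM never boots and the remaining Functional tests inherit a stopped cluster. A minimal host-side check might look like the following (a sketch only; it assumes socket_vmnet was installed via Homebrew as described in minikube's qemu driver docs, and the paths are taken from the log above):

	# is the unix socket present, and is the daemon running?
	ls -l /var/run/socket_vmnet
	ps aux | grep [s]ocket_vmnet
	# one documented way to (re)start it for Homebrew installs is as a root service:
	sudo brew services start socket_vmnet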

                                                
                                    
TestFunctional/serial/InvalidService (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-873000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-873000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.053042ms)

                                                
                                                
** stderr ** 
	error: context "functional-873000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-873000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
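The 'context "functional-873000" does not exist' errors here and in the parallel tests below are a downstream effect of the same failed start: because the cluster never came up, no kubeconfig context exists for the profile, so every kubectl --context functional-873000 invocation fails immediately. This can be confirmed from the same shell with plain kubectl (nothing minikube-specific assumed):

	kubectl config get-contexts
	kubectl config current-context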

                                                
                                    
TestFunctional/parallel/DashboardCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-873000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-873000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-873000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-873000 --alsologtostderr -v=1] stderr:
I0520 04:17:16.670033   15559 out.go:291] Setting OutFile to fd 1 ...
I0520 04:17:16.670447   15559 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:16.670450   15559 out.go:304] Setting ErrFile to fd 2...
I0520 04:17:16.670453   15559 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:16.670609   15559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:17:16.670817   15559 mustload.go:65] Loading cluster: functional-873000
I0520 04:17:16.671013   15559 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:16.673995   15559 out.go:177] * The control-plane node functional-873000 host is not running: state=Stopped
I0520 04:17:16.678001   15559 out.go:177]   To start a cluster, run: "minikube start -p functional-873000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (41.562125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 status: exit status 7 (29.830917ms)

                                                
                                                
-- stdout --
	functional-873000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-873000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.210042ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-873000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 status -o json: exit status 7 (29.05475ms)

                                                
                                                
-- stdout --
	{"Name":"functional-873000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-873000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.9825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-873000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-873000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.83425ms)

                                                
                                                
** stderr ** 
	error: context "functional-873000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-873000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-873000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-873000 describe po hello-node-connect: exit status 1 (26.298875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:1600: "kubectl --context functional-873000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-873000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-873000 logs -l app=hello-node-connect: exit status 1 (26.109875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:1606: "kubectl --context functional-873000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-873000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-873000 describe svc hello-node-connect: exit status 1 (27.212375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:1612: "kubectl --context functional-873000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (30.141541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-873000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.251084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "echo hello": exit status 83 (43.548791ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n"*. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "cat /etc/hostname": exit status 83 (44.898834ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-873000"- but got *"* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n"*. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (31.711625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.307625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-873000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.916333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-873000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-873000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cp functional-873000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd232912973/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 cp functional-873000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd232912973/001/cp-test.txt: exit status 83 (39.7875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-873000 cp functional-873000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd232912973/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.630417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd232912973/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.868ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-873000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (39.962125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-873000 ssh -n functional-873000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-873000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-873000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.26s)

                                                
                                    
TestFunctional/parallel/FileSync (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14895/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/test/nested/copy/14895/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/test/nested/copy/14895/hosts": exit status 83 (45.325959ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/test/nested/copy/14895/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-873000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-873000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.36275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

                                                
                                    
TestFunctional/parallel/CertSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14895.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/14895.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/14895.pem": exit status 83 (40.997625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/14895.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo cat /etc/ssl/certs/14895.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/14895.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-873000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-873000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14895.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /usr/share/ca-certificates/14895.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /usr/share/ca-certificates/14895.pem": exit status 83 (44.831416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/14895.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo cat /usr/share/ca-certificates/14895.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/14895.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-873000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-873000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.58175ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-873000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-873000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/148952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/148952.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/148952.pem": exit status 83 (38.862667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/148952.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo cat /etc/ssl/certs/148952.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/148952.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-873000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-873000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/148952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /usr/share/ca-certificates/148952.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /usr/share/ca-certificates/148952.pem": exit status 83 (38.733292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/148952.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo cat /usr/share/ca-certificates/148952.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/148952.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-873000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-873000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (44.641833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-873000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-873000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.1805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-873000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-873000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.051584ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-873000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-873000 -n functional-873000: exit status 7 (29.418125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo systemctl is-active crio": exit status 83 (43.096708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 version -o=json --components: exit status 83 (40.831583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-873000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-873000 image ls --format short --alsologtostderr:
I0520 04:17:17.067943   15574 out.go:291] Setting OutFile to fd 1 ...
I0520 04:17:17.068106   15574 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.068109   15574 out.go:304] Setting ErrFile to fd 2...
I0520 04:17:17.068111   15574 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.068224   15574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:17:17.068707   15574 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:17.068770   15574 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-873000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-873000 image ls --format table --alsologtostderr:
I0520 04:17:17.173368   15580 out.go:291] Setting OutFile to fd 1 ...
I0520 04:17:17.173529   15580 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.173532   15580 out.go:304] Setting ErrFile to fd 2...
I0520 04:17:17.173534   15580 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.173653   15580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:17:17.174051   15580 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:17.174120   15580 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-873000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-873000 image ls --format json --alsologtostderr:
I0520 04:17:17.138554   15578 out.go:291] Setting OutFile to fd 1 ...
I0520 04:17:17.138729   15578 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.138732   15578 out.go:304] Setting ErrFile to fd 2...
I0520 04:17:17.138735   15578 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.138847   15578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:17:17.139230   15578 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:17.139291   15578 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-873000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-873000 image ls --format yaml --alsologtostderr:
I0520 04:17:17.103715   15576 out.go:291] Setting OutFile to fd 1 ...
I0520 04:17:17.103880   15576 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.103883   15576 out.go:304] Setting ErrFile to fd 2...
I0520 04:17:17.103885   15576 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.103995   15576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:17:17.104414   15576 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:17.104471   15576 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh pgrep buildkitd: exit status 83 (40.679667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image build -t localhost/my-image:functional-873000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-873000 image build -t localhost/my-image:functional-873000 testdata/build --alsologtostderr:
I0520 04:17:17.248650   15584 out.go:291] Setting OutFile to fd 1 ...
I0520 04:17:17.249039   15584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.249042   15584 out.go:304] Setting ErrFile to fd 2...
I0520 04:17:17.249045   15584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:17:17.249194   15584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:17:17.249593   15584 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:17.250031   15584 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:17:17.250263   15584 build_images.go:133] succeeded building to: 
I0520 04:17:17.250266   15584 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls
functional_test.go:442: expected "localhost/my-image:functional-873000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-873000 docker-env) && out/minikube-darwin-arm64 status -p functional-873000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-873000 docker-env) && out/minikube-darwin-arm64 status -p functional-873000": exit status 1 (45.722084ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2: exit status 83 (40.723167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:17:16.941232   15568 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:17:16.942250   15568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.942253   15568 out.go:304] Setting ErrFile to fd 2...
	I0520 04:17:16.942256   15568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.942406   15568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:17:16.942607   15568 mustload.go:65] Loading cluster: functional-873000
	I0520 04:17:16.942789   15568 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:17:16.946129   15568 out.go:177] * The control-plane node functional-873000 host is not running: state=Stopped
	I0520 04:17:16.950002   15568 out.go:177]   To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2: exit status 83 (41.811084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:17:17.025317   15572 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:17:17.025474   15572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:17.025477   15572 out.go:304] Setting ErrFile to fd 2...
	I0520 04:17:17.025479   15572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:17.025607   15572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:17:17.025815   15572 mustload.go:65] Loading cluster: functional-873000
	I0520 04:17:17.026049   15572 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:17:17.030993   15572 out.go:177] * The control-plane node functional-873000 host is not running: state=Stopped
	I0520 04:17:17.035022   15572 out.go:177]   To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2: exit status 83 (41.738541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:17:16.982842   15570 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:17:16.982980   15570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.982983   15570 out.go:304] Setting ErrFile to fd 2...
	I0520 04:17:16.982985   15570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.983104   15570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:17:16.983307   15570 mustload.go:65] Loading cluster: functional-873000
	I0520 04:17:16.983505   15570 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:17:16.987872   15570 out.go:177] * The control-plane node functional-873000 host is not running: state=Stopped
	I0520 04:17:16.992055   15570 out.go:177]   To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-873000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-873000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-873000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.157125ms)

                                                
                                                
** stderr ** 
	error: context "functional-873000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-873000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 service list: exit status 83 (41.517458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-873000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 service list -o json: exit status 83 (43.887334ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-873000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 service --namespace=default --https --url hello-node: exit status 83 (44.697541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-873000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 service hello-node --url --format={{.IP}}: exit status 83 (45.99425ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-873000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 service hello-node --url: exit status 83 (41.802417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-873000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test.go:1565: failed to parse "* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"": parse "* The control-plane node functional-873000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-873000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0520 04:16:29.128590   15344 out.go:291] Setting OutFile to fd 1 ...
I0520 04:16:29.128759   15344 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:16:29.128763   15344 out.go:304] Setting ErrFile to fd 2...
I0520 04:16:29.128766   15344 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:16:29.128921   15344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:16:29.129152   15344 mustload.go:65] Loading cluster: functional-873000
I0520 04:16:29.129400   15344 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:16:29.138195   15344 out.go:177] * The control-plane node functional-873000 host is not running: state=Stopped
I0520 04:16:29.146212   15344 out.go:177]   To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
stdout: * The control-plane node functional-873000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-873000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 15345: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-873000": client config: context "functional-873000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (119.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-873000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-873000 get svc nginx-svc: exit status 1 (67.887209ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-873000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-873000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (119.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image load --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-873000 image load --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr: (1.379659042s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-873000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image load --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-873000 image load --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr: (1.433068083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-873000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.306873292s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-873000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image load --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-873000 image load --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr: (1.2152985s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-873000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image save gcr.io/google-containers/addon-resizer:functional-873000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-873000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030795292s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.39s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-903000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-903000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.062613375s)

                                                
                                                
-- stdout --
	* [ha-903000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-903000" primary control-plane node in "ha-903000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-903000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:19:30.215451   15651 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:19:30.215573   15651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:19:30.215576   15651 out.go:304] Setting ErrFile to fd 2...
	I0520 04:19:30.215578   15651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:19:30.215696   15651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:19:30.216686   15651 out.go:298] Setting JSON to false
	I0520 04:19:30.232678   15651 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8341,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:19:30.232735   15651 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:19:30.237009   15651 out.go:177] * [ha-903000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:19:30.245052   15651 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:19:30.245112   15651 notify.go:220] Checking for updates...
	I0520 04:19:30.249092   15651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:19:30.252187   15651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:19:30.254969   15651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:19:30.258027   15651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:19:30.261067   15651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:19:30.264112   15651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:19:30.268061   15651 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:19:30.275015   15651 start.go:297] selected driver: qemu2
	I0520 04:19:30.275022   15651 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:19:30.275029   15651 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:19:30.277279   15651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:19:30.280061   15651 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:19:30.283175   15651 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:19:30.283190   15651 cni.go:84] Creating CNI manager for ""
	I0520 04:19:30.283194   15651 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:19:30.283198   15651 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:19:30.283238   15651 start.go:340] cluster config:
	{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:19:30.287944   15651 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:19:30.295049   15651 out.go:177] * Starting "ha-903000" primary control-plane node in "ha-903000" cluster
	I0520 04:19:30.298987   15651 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:19:30.299002   15651 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:19:30.299014   15651 cache.go:56] Caching tarball of preloaded images
	I0520 04:19:30.299072   15651 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:19:30.299077   15651 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:19:30.299271   15651 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/ha-903000/config.json ...
	I0520 04:19:30.299282   15651 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/ha-903000/config.json: {Name:mke7c4cd35691e48e0d7d5e73f972c698c8c03a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:19:30.299493   15651 start.go:360] acquireMachinesLock for ha-903000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:19:30.299527   15651 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "ha-903000"
	I0520 04:19:30.299539   15651 start.go:93] Provisioning new machine with config: &{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:19:30.299569   15651 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:19:30.308062   15651 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:19:30.326990   15651 start.go:159] libmachine.API.Create for "ha-903000" (driver="qemu2")
	I0520 04:19:30.327021   15651 client.go:168] LocalClient.Create starting
	I0520 04:19:30.327083   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:19:30.327114   15651 main.go:141] libmachine: Decoding PEM data...
	I0520 04:19:30.327124   15651 main.go:141] libmachine: Parsing certificate...
	I0520 04:19:30.327164   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:19:30.327187   15651 main.go:141] libmachine: Decoding PEM data...
	I0520 04:19:30.327193   15651 main.go:141] libmachine: Parsing certificate...
	I0520 04:19:30.327544   15651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:19:30.714192   15651 main.go:141] libmachine: Creating SSH key...
	I0520 04:19:30.749901   15651 main.go:141] libmachine: Creating Disk image...
	I0520 04:19:30.749907   15651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:19:30.750136   15651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:19:30.762830   15651 main.go:141] libmachine: STDOUT: 
	I0520 04:19:30.762849   15651 main.go:141] libmachine: STDERR: 
	I0520 04:19:30.762900   15651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2 +20000M
	I0520 04:19:30.773693   15651 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:19:30.773709   15651 main.go:141] libmachine: STDERR: 
	I0520 04:19:30.773722   15651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:19:30.773727   15651 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:19:30.773749   15651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4e:ea:21:52:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:19:30.775440   15651 main.go:141] libmachine: STDOUT: 
	I0520 04:19:30.775454   15651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:19:30.775472   15651 client.go:171] duration metric: took 448.45175ms to LocalClient.Create
	I0520 04:19:32.777617   15651 start.go:128] duration metric: took 2.478055208s to createHost
	I0520 04:19:32.777676   15651 start.go:83] releasing machines lock for "ha-903000", held for 2.478170541s
	W0520 04:19:32.777742   15651 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:19:32.793686   15651 out.go:177] * Deleting "ha-903000" in qemu2 ...
	W0520 04:19:32.815885   15651 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:19:32.815906   15651 start.go:728] Will try again in 5 seconds ...
	I0520 04:19:37.818032   15651 start.go:360] acquireMachinesLock for ha-903000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:19:37.818448   15651 start.go:364] duration metric: took 321.25µs to acquireMachinesLock for "ha-903000"
	I0520 04:19:37.818554   15651 start.go:93] Provisioning new machine with config: &{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:19:37.818809   15651 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:19:37.830136   15651 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:19:37.879080   15651 start.go:159] libmachine.API.Create for "ha-903000" (driver="qemu2")
	I0520 04:19:37.879120   15651 client.go:168] LocalClient.Create starting
	I0520 04:19:37.879230   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:19:37.879299   15651 main.go:141] libmachine: Decoding PEM data...
	I0520 04:19:37.879317   15651 main.go:141] libmachine: Parsing certificate...
	I0520 04:19:37.879390   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:19:37.879434   15651 main.go:141] libmachine: Decoding PEM data...
	I0520 04:19:37.879446   15651 main.go:141] libmachine: Parsing certificate...
	I0520 04:19:37.880019   15651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:19:38.017162   15651 main.go:141] libmachine: Creating SSH key...
	I0520 04:19:38.183884   15651 main.go:141] libmachine: Creating Disk image...
	I0520 04:19:38.183891   15651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:19:38.184070   15651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:19:38.196543   15651 main.go:141] libmachine: STDOUT: 
	I0520 04:19:38.196563   15651 main.go:141] libmachine: STDERR: 
	I0520 04:19:38.196621   15651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2 +20000M
	I0520 04:19:38.207424   15651 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:19:38.207445   15651 main.go:141] libmachine: STDERR: 
	I0520 04:19:38.207455   15651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:19:38.207459   15651 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:19:38.207487   15651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:92:a8:5f:d5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:19:38.209195   15651 main.go:141] libmachine: STDOUT: 
	I0520 04:19:38.209209   15651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:19:38.209220   15651 client.go:171] duration metric: took 330.095583ms to LocalClient.Create
	I0520 04:19:40.210487   15651 start.go:128] duration metric: took 2.391682208s to createHost
	I0520 04:19:40.210541   15651 start.go:83] releasing machines lock for "ha-903000", held for 2.392089625s
	W0520 04:19:40.210889   15651 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-903000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:19:40.220811   15651 out.go:177] 
	W0520 04:19:40.225886   15651 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:19:40.225919   15651 out.go:239] * 
	* 
	W0520 04:19:40.228379   15651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:19:40.237272   15651 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-903000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (64.270042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.13s)
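Every VM create above dies at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 host is never provisioned. A minimal, hypothetical pre-flight check (not part of ha_test.go; the socket path is copied from the log above) that dials the socket directly would show whether the socket_vmnet daemon is running on the Jenkins host before retrying the suite:

	// Hypothetical pre-flight check (not part of this test run): dial the unix
	// socket that the qemu2 driver hands to socket_vmnet_client. A "connection
	// refused" here means the socket_vmnet daemon is not running, which is
	// exactly the failure mode seen in StartCluster above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}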

                                                
                                    
TestMultiControlPlane/serial/DeployApp (104.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (57.752667ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-903000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- rollout status deployment/busybox: exit status 1 (53.84775ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (53.774083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.399833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.744833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.72625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.387666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.7735ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.754417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.178541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.371792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.432666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.113416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.308917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.593958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.526167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.621208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.569584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (104.40s)
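All of the kubectl failures in this block reduce to one cause: StartCluster never provisioned a VM, so minikube never wrote a ha-903000 entry into the kubeconfig, and kubectl can only report "cluster ... does not exist" / "no server found". A rough, stdlib-only sketch (assuming the conventional default kubeconfig location) that checks for the missing entry:

	// Hypothetical sanity check: scan the default kubeconfig for the profile
	// name. Finding nothing is consistent with every "no server found for
	// cluster" error logged above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		path := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed default location
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("could not read kubeconfig:", err)
			return
		}
		if strings.Contains(string(data), "ha-903000") {
			fmt.Println("kubeconfig contains an entry for ha-903000")
		} else {
			fmt.Println("no ha-903000 entry - matches the kubectl errors above")
		}
	}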

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-903000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.198292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-903000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (29.481542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-903000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-903000 -v=7 --alsologtostderr: exit status 83 (40.910792ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-903000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:24.831054   15765 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:24.831214   15765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:24.831224   15765 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:24.831226   15765 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:24.831351   15765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:24.831580   15765 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:24.831785   15765 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:24.835986   15765 out.go:177] * The control-plane node ha-903000 host is not running: state=Stopped
	I0520 04:21:24.839903   15765 out.go:177]   To start a cluster, run: "minikube start -p ha-903000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-903000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (29.130459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-903000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-903000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.81275ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-903000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-903000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-903000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (29.397666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
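The "unexpected end of JSON input" at ha_test.go:264 follows directly from the kubectl failure: the command wrote nothing to stdout, and decoding an empty byte slice with encoding/json always returns that exact error. A two-line standalone illustration (a sketch, not the test's own code):

	// Decoding empty output reproduces the error reported above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var v interface{}
		err := json.Unmarshal([]byte(""), &v)
		fmt.Println(err) // unexpected end of JSON input
	}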

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-903000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-903000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.715583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status --output json -v=7 --alsologtostderr: exit status 7 (29.248916ms)

                                                
                                                
-- stdout --
	{"Name":"ha-903000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:25.057169   15779 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:25.057333   15779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.057336   15779 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:25.057338   15779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.057483   15779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:25.057612   15779 out.go:298] Setting JSON to true
	I0520 04:21:25.057621   15779 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:25.057687   15779 notify.go:220] Checking for updates...
	I0520 04:21:25.057822   15779 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:25.057829   15779 status.go:255] checking status of ha-903000 ...
	I0520 04:21:25.058049   15779 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:25.058053   15779 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:25.058056   15779 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-903000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.919416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
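The decode failure at ha_test.go:333 is a shape mismatch rather than corrupt output: with only one (stopped) node, "minikube status --output json" printed a single JSON object, while the test expects an array of statuses for a multi-node cluster. A small standalone reproduction (the Status struct below is a local stand-in for cmd.Status, an assumption):

	// Reproduces "cannot unmarshal object into Go value of type []...Status":
	// a lone JSON object cannot be decoded into a slice, but decodes fine
	// into a single struct.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		out := []byte(`{"Name":"ha-903000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(out, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		var one Status
		fmt.Println(json.Unmarshal(out, &one)) // <nil> - a single object decodes fine
	}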

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.054792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:25.115732   15783 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:25.115895   15783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.115898   15783 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:25.115901   15783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.116013   15783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:25.116258   15783 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:25.116448   15783 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:25.120740   15783 out.go:177] 
	W0520 04:21:25.124707   15783 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0520 04:21:25.124716   15783 out.go:239] * 
	* 
	W0520 04:21:25.127031   15783 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:21:25.129662   15783 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-903000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (29.408666ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:25.162372   15785 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:25.162531   15785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.162534   15785 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:25.162536   15785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.162684   15785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:25.162797   15785 out.go:298] Setting JSON to false
	I0520 04:21:25.162805   15785 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:25.162870   15785 notify.go:220] Checking for updates...
	I0520 04:21:25.163006   15785 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:25.163012   15785 status.go:255] checking status of ha-903000 ...
	I0520 04:21:25.163222   15785 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:25.163225   15785 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:25.163228   15785 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (29.088458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-903000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.945209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
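The check at ha_test.go:413 parses the output of "out/minikube-darwin-arm64 profile list --output json" and expects the profile's Status field to read "Degraded"; the run above returned "Stopped". Below is a minimal standalone sketch of that same check, assuming only the field names visible in the JSON dump above (valid, Name, Status); it is an illustrative reconstruction, not the test's own code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields this check needs from
// `minikube profile list --output json`.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The assertion above expects "Degraded" here; this run reports "Stopped".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}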

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (39.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.363458ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:25.317298   15795 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:25.317454   15795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.317458   15795 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:25.317460   15795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.317581   15795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:25.317879   15795 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:25.318078   15795 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:25.322203   15795 out.go:177] 
	W0520 04:21:25.326279   15795 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0520 04:21:25.326283   15795 out.go:239] * 
	* 
	W0520 04:21:25.328571   15795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:21:25.333206   15795 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0520 04:21:25.317298   15795 out.go:291] Setting OutFile to fd 1 ...
I0520 04:21:25.317454   15795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:21:25.317458   15795 out.go:304] Setting ErrFile to fd 2...
I0520 04:21:25.317460   15795 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:21:25.317581   15795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:21:25.317879   15795 mustload.go:65] Loading cluster: ha-903000
I0520 04:21:25.318078   15795 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:21:25.322203   15795 out.go:177] 
W0520 04:21:25.326279   15795 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0520 04:21:25.326283   15795 out.go:239] * 
* 
W0520 04:21:25.328571   15795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:21:25.333206   15795 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-903000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (29.19425ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:25.365723   15797 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:25.365878   15797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.365882   15797 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:25.365884   15797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:25.366013   15797 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:25.366139   15797 out.go:298] Setting JSON to false
	I0520 04:21:25.366148   15797 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:25.366205   15797 notify.go:220] Checking for updates...
	I0520 04:21:25.366343   15797 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:25.366351   15797 status.go:255] checking status of ha-903000 ...
	I0520 04:21:25.366580   15797 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:25.366584   15797 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:25.366586   15797 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (72.845167ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:26.805368   15799 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:26.805562   15799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:26.805567   15799 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:26.805570   15799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:26.805740   15799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:26.805896   15799 out.go:298] Setting JSON to false
	I0520 04:21:26.805907   15799 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:26.805939   15799 notify.go:220] Checking for updates...
	I0520 04:21:26.806172   15799 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:26.806181   15799 status.go:255] checking status of ha-903000 ...
	I0520 04:21:26.806467   15799 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:26.806472   15799 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:26.806475   15799 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (74.884875ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:27.703996   15801 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:27.704211   15801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:27.704215   15801 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:27.704218   15801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:27.704390   15801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:27.704551   15801 out.go:298] Setting JSON to false
	I0520 04:21:27.704562   15801 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:27.704605   15801 notify.go:220] Checking for updates...
	I0520 04:21:27.704841   15801 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:27.704849   15801 status.go:255] checking status of ha-903000 ...
	I0520 04:21:27.705132   15801 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:27.705137   15801 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:27.705140   15801 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (46.537375ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:30.780043   15803 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:30.780178   15803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:30.780181   15803 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:30.780183   15803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:30.780308   15803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:30.780434   15803 out.go:298] Setting JSON to false
	I0520 04:21:30.780443   15803 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:30.780481   15803 notify.go:220] Checking for updates...
	I0520 04:21:30.780621   15803 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:30.780629   15803 status.go:255] checking status of ha-903000 ...
	I0520 04:21:30.780841   15803 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:30.780845   15803 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:30.780847   15803 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (72.528375ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:35.155564   15811 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:35.155781   15811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:35.155786   15811 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:35.155789   15811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:35.155965   15811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:35.156128   15811 out.go:298] Setting JSON to false
	I0520 04:21:35.156139   15811 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:35.156173   15811 notify.go:220] Checking for updates...
	I0520 04:21:35.156417   15811 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:35.156425   15811 status.go:255] checking status of ha-903000 ...
	I0520 04:21:35.156707   15811 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:35.156713   15811 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:35.156715   15811 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (73.673333ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:40.103169   15815 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:40.103404   15815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:40.103408   15815 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:40.103411   15815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:40.103573   15815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:40.103744   15815 out.go:298] Setting JSON to false
	I0520 04:21:40.103756   15815 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:40.103804   15815 notify.go:220] Checking for updates...
	I0520 04:21:40.104050   15815 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:40.104059   15815 status.go:255] checking status of ha-903000 ...
	I0520 04:21:40.104346   15815 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:40.104351   15815 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:40.104354   15815 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (76.532958ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:21:47.902819   15820 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:21:47.903030   15820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:47.903035   15820 out.go:304] Setting ErrFile to fd 2...
	I0520 04:21:47.903039   15820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:21:47.903224   15820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:21:47.903398   15820 out.go:298] Setting JSON to false
	I0520 04:21:47.903412   15820 mustload.go:65] Loading cluster: ha-903000
	I0520 04:21:47.903451   15820 notify.go:220] Checking for updates...
	I0520 04:21:47.903710   15820 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:21:47.903720   15820 status.go:255] checking status of ha-903000 ...
	I0520 04:21:47.904008   15820 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:21:47.904014   15820 status.go:343] host is not running, skipping remaining checks
	I0520 04:21:47.904017   15820 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (72.731666ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:04.993053   15834 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:04.993255   15834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:04.993259   15834 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:04.993262   15834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:04.993447   15834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:04.993613   15834 out.go:298] Setting JSON to false
	I0520 04:22:04.993626   15834 mustload.go:65] Loading cluster: ha-903000
	I0520 04:22:04.993666   15834 notify.go:220] Checking for updates...
	I0520 04:22:04.993897   15834 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:04.993906   15834 status.go:255] checking status of ha-903000 ...
	I0520 04:22:04.994202   15834 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:22:04.994207   15834 status.go:343] host is not running, skipping remaining checks
	I0520 04:22:04.994210   15834 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (32.569291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (39.74s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-903000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-903000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (29.025667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-903000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-903000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-903000 -v=7 --alsologtostderr: (1.921809542s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-903000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-903000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.21620375s)

                                                
                                                
-- stdout --
	* [ha-903000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-903000" primary control-plane node in "ha-903000" cluster
	* Restarting existing qemu2 VM for "ha-903000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-903000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:07.138347   15858 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:07.138510   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:07.138514   15858 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:07.138517   15858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:07.138679   15858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:07.139919   15858 out.go:298] Setting JSON to false
	I0520 04:22:07.159348   15858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8498,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:22:07.159418   15858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:22:07.164009   15858 out.go:177] * [ha-903000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:22:07.170830   15858 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:22:07.170874   15858 notify.go:220] Checking for updates...
	I0520 04:22:07.173811   15858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:22:07.176886   15858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:22:07.179909   15858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:22:07.181206   15858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:22:07.183867   15858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:22:07.187218   15858 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:07.187278   15858 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:22:07.191723   15858 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:22:07.198838   15858 start.go:297] selected driver: qemu2
	I0520 04:22:07.198845   15858 start.go:901] validating driver "qemu2" against &{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:22:07.198908   15858 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:22:07.201348   15858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:22:07.201378   15858 cni.go:84] Creating CNI manager for ""
	I0520 04:22:07.201383   15858 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:22:07.201434   15858 start.go:340] cluster config:
	{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:22:07.206045   15858 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:22:07.212844   15858 out.go:177] * Starting "ha-903000" primary control-plane node in "ha-903000" cluster
	I0520 04:22:07.216885   15858 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:22:07.216904   15858 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:22:07.216915   15858 cache.go:56] Caching tarball of preloaded images
	I0520 04:22:07.216969   15858 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:22:07.216974   15858 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:22:07.217029   15858 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/ha-903000/config.json ...
	I0520 04:22:07.217453   15858 start.go:360] acquireMachinesLock for ha-903000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:22:07.217493   15858 start.go:364] duration metric: took 32.834µs to acquireMachinesLock for "ha-903000"
	I0520 04:22:07.217504   15858 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:22:07.217511   15858 fix.go:54] fixHost starting: 
	I0520 04:22:07.217639   15858 fix.go:112] recreateIfNeeded on ha-903000: state=Stopped err=<nil>
	W0520 04:22:07.217648   15858 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:22:07.224904   15858 out.go:177] * Restarting existing qemu2 VM for "ha-903000" ...
	I0520 04:22:07.227921   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:92:a8:5f:d5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:22:07.230018   15858 main.go:141] libmachine: STDOUT: 
	I0520 04:22:07.230039   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:22:07.230074   15858 fix.go:56] duration metric: took 12.563709ms for fixHost
	I0520 04:22:07.230079   15858 start.go:83] releasing machines lock for "ha-903000", held for 12.58125ms
	W0520 04:22:07.230086   15858 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:22:07.230116   15858 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:22:07.230121   15858 start.go:728] Will try again in 5 seconds ...
	I0520 04:22:12.232233   15858 start.go:360] acquireMachinesLock for ha-903000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:22:12.232674   15858 start.go:364] duration metric: took 339.708µs to acquireMachinesLock for "ha-903000"
	I0520 04:22:12.232776   15858 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:22:12.232795   15858 fix.go:54] fixHost starting: 
	I0520 04:22:12.233611   15858 fix.go:112] recreateIfNeeded on ha-903000: state=Stopped err=<nil>
	W0520 04:22:12.233645   15858 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:22:12.238959   15858 out.go:177] * Restarting existing qemu2 VM for "ha-903000" ...
	I0520 04:22:12.248058   15858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:92:a8:5f:d5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:22:12.257127   15858 main.go:141] libmachine: STDOUT: 
	I0520 04:22:12.257209   15858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:22:12.257293   15858 fix.go:56] duration metric: took 24.495ms for fixHost
	I0520 04:22:12.257311   15858 start.go:83] releasing machines lock for "ha-903000", held for 24.616042ms
	W0520 04:22:12.257501   15858 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-903000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-903000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:22:12.265871   15858 out.go:177] 
	W0520 04:22:12.269740   15858 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:22:12.269786   15858 out.go:239] * 
	* 
	W0520 04:22:12.272631   15858 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:22:12.281790   15858 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-903000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-903000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (32.1195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.27s)
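The restart in this test fails before provisioning because the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused" in the log above). A small host-side probe like the following, which simply dials the same unix socket path, would reproduce that condition; this is a hypothetical diagnostic sketch, not part of the test suite or of the minikube tooling.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" (or "no such file or directory") here matches
		// the driver start failure captured in the log above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}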

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.70875ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-903000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:12.422401   15873 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:12.422539   15873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:12.422542   15873 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:12.422544   15873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:12.422669   15873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:12.422875   15873 mustload.go:65] Loading cluster: ha-903000
	I0520 04:22:12.423043   15873 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:12.426956   15873 out.go:177] * The control-plane node ha-903000 host is not running: state=Stopped
	I0520 04:22:12.434775   15873 out.go:177]   To start a cluster, run: "minikube start -p ha-903000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-903000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (28.199959ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:12.467142   15875 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:12.467262   15875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:12.467265   15875 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:12.467268   15875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:12.467391   15875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:12.467506   15875 out.go:298] Setting JSON to false
	I0520 04:22:12.467516   15875 mustload.go:65] Loading cluster: ha-903000
	I0520 04:22:12.467583   15875 notify.go:220] Checking for updates...
	I0520 04:22:12.467727   15875 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:12.467735   15875 status.go:255] checking status of ha-903000 ...
	I0520 04:22:12.467940   15875 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:22:12.467945   15875 status.go:343] host is not running, skipping remaining checks
	I0520 04:22:12.467947   15875 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (27.901875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-903000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.032583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
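Note: the "Degraded" vs. "Stopped" check above is made against the top-level "Status" field of the ha-903000 entry in the `profile list --output json` output quoted in the assertion. As a hedged aside (jq is assumed to be available on the build host; the test itself does not use it), the same field can be pulled out by hand:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-903000") | .Status'
    # prints "Stopped" for this run; the test wanted "Degraded"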

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-903000 stop -v=7 --alsologtostderr: (3.599470083s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr: exit status 7 (68.951042ms)

                                                
                                                
-- stdout --
	ha-903000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:16.261543   15907 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:16.261717   15907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:16.261721   15907 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:16.261724   15907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:16.261891   15907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:16.262041   15907 out.go:298] Setting JSON to false
	I0520 04:22:16.262053   15907 mustload.go:65] Loading cluster: ha-903000
	I0520 04:22:16.262093   15907 notify.go:220] Checking for updates...
	I0520 04:22:16.262349   15907 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:16.262358   15907 status.go:255] checking status of ha-903000 ...
	I0520 04:22:16.262637   15907 status.go:330] ha-903000 host status = "Stopped" (err=<nil>)
	I0520 04:22:16.262642   15907 status.go:343] host is not running, skipping remaining checks
	I0520 04:22:16.262645   15907 status.go:257] ha-903000 status: &{Name:ha-903000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-903000 status -v=7 --alsologtostderr": ha-903000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (31.21775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-903000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-903000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.18943625s)

                                                
                                                
-- stdout --
	* [ha-903000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-903000" primary control-plane node in "ha-903000" cluster
	* Restarting existing qemu2 VM for "ha-903000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-903000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:16.321034   15911 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:16.321197   15911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:16.321200   15911 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:16.321203   15911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:16.321314   15911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:16.322321   15911 out.go:298] Setting JSON to false
	I0520 04:22:16.338029   15911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8507,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:22:16.338082   15911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:22:16.342610   15911 out.go:177] * [ha-903000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:22:16.349566   15911 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:22:16.349623   15911 notify.go:220] Checking for updates...
	I0520 04:22:16.356546   15911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:22:16.359581   15911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:22:16.362602   15911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:22:16.365451   15911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:22:16.368568   15911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:22:16.371862   15911 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:16.372149   15911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:22:16.375454   15911 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:22:16.382545   15911 start.go:297] selected driver: qemu2
	I0520 04:22:16.382551   15911 start.go:901] validating driver "qemu2" against &{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:22:16.382600   15911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:22:16.384723   15911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:22:16.384747   15911 cni.go:84] Creating CNI manager for ""
	I0520 04:22:16.384751   15911 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:22:16.384799   15911 start.go:340] cluster config:
	{Name:ha-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-903000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:22:16.388928   15911 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:22:16.396555   15911 out.go:177] * Starting "ha-903000" primary control-plane node in "ha-903000" cluster
	I0520 04:22:16.400590   15911 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:22:16.400605   15911 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:22:16.400615   15911 cache.go:56] Caching tarball of preloaded images
	I0520 04:22:16.400672   15911 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:22:16.400677   15911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:22:16.400727   15911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/ha-903000/config.json ...
	I0520 04:22:16.401123   15911 start.go:360] acquireMachinesLock for ha-903000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:22:16.401153   15911 start.go:364] duration metric: took 23.166µs to acquireMachinesLock for "ha-903000"
	I0520 04:22:16.401163   15911 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:22:16.401169   15911 fix.go:54] fixHost starting: 
	I0520 04:22:16.401289   15911 fix.go:112] recreateIfNeeded on ha-903000: state=Stopped err=<nil>
	W0520 04:22:16.401297   15911 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:22:16.416541   15911 out.go:177] * Restarting existing qemu2 VM for "ha-903000" ...
	I0520 04:22:16.420464   15911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:92:a8:5f:d5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:22:16.422564   15911 main.go:141] libmachine: STDOUT: 
	I0520 04:22:16.422586   15911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:22:16.422619   15911 fix.go:56] duration metric: took 21.449584ms for fixHost
	I0520 04:22:16.422623   15911 start.go:83] releasing machines lock for "ha-903000", held for 21.4655ms
	W0520 04:22:16.422630   15911 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:22:16.422669   15911 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:22:16.422673   15911 start.go:728] Will try again in 5 seconds ...
	I0520 04:22:21.424856   15911 start.go:360] acquireMachinesLock for ha-903000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:22:21.425244   15911 start.go:364] duration metric: took 269.708µs to acquireMachinesLock for "ha-903000"
	I0520 04:22:21.425373   15911 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:22:21.425395   15911 fix.go:54] fixHost starting: 
	I0520 04:22:21.426094   15911 fix.go:112] recreateIfNeeded on ha-903000: state=Stopped err=<nil>
	W0520 04:22:21.426125   15911 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:22:21.435480   15911 out.go:177] * Restarting existing qemu2 VM for "ha-903000" ...
	I0520 04:22:21.440745   15911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:92:a8:5f:d5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/ha-903000/disk.qcow2
	I0520 04:22:21.450192   15911 main.go:141] libmachine: STDOUT: 
	I0520 04:22:21.450258   15911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:22:21.450373   15911 fix.go:56] duration metric: took 24.981958ms for fixHost
	I0520 04:22:21.450391   15911 start.go:83] releasing machines lock for "ha-903000", held for 25.122333ms
	W0520 04:22:21.450565   15911 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-903000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-903000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:22:21.457531   15911 out.go:177] 
	W0520 04:22:21.461403   15911 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:22:21.461461   15911 out.go:239] * 
	* 
	W0520 04:22:21.464182   15911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:22:21.471481   15911 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-903000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (64.795458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
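Note: both restart attempts above fail at the same point: socket_vmnet_client cannot reach the "/var/run/socket_vmnet" socket ("Connection refused"), so the qemu2 VM is never brought back up. A hedged diagnostic sketch for the build host (these commands are a suggestion, not part of the recorded run; the socket path comes from the qemu command line logged above):

    ls -l /var/run/socket_vmnet                    # is the socket file present?
    sudo lsof -U 2>/dev/null | grep socket_vmnet   # is any socket_vmnet process still listening on it?

"Connection refused" on a UNIX socket generally means the file exists but nothing is accepting connections on it, which points at the socket_vmnet service on the agent rather than at minikube itself.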

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-903000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (27.601541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-903000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-903000 --control-plane -v=7 --alsologtostderr: exit status 83 (38.691625ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-903000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-903000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:22:21.674883   15929 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:22:21.675028   15929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:21.675031   15929 out.go:304] Setting ErrFile to fd 2...
	I0520 04:22:21.675033   15929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:22:21.675157   15929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:22:21.675375   15929 mustload.go:65] Loading cluster: ha-903000
	I0520 04:22:21.675559   15929 config.go:182] Loaded profile config "ha-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:22:21.678442   15929 out.go:177] * The control-plane node ha-903000 host is not running: state=Stopped
	I0520 04:22:21.682492   15929 out.go:177]   To start a cluster, run: "minikube start -p ha-903000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-903000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.122459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-903000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-903000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-903000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-903000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-903000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-903000 -n ha-903000: exit status 7 (28.362125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-903000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                    
TestImageBuild/serial/Setup (9.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-723000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-723000 --driver=qemu2 : exit status 80 (9.873135542s)

                                                
                                                
-- stdout --
	* [image-723000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-723000" primary control-plane node in "image-723000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-723000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-723000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-723000 -n image-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-723000 -n image-723000: exit status 7 (66.842375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-723000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.94s)

                                                
                                    
TestJSONOutput/start/Command (9.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-152000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-152000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.785985125s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1ef477fd-5bc6-49ef-9f1a-a31f9fe29106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-152000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"721e9ead-7914-46cb-bd1d-85c62c0d4b4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18932"}}
	{"specversion":"1.0","id":"c86fed0d-5363-4b19-a876-08bb8906c748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig"}}
	{"specversion":"1.0","id":"ccedf683-4708-4d42-91bb-294adbccd509","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"304f5f4f-b72e-4767-9125-3a351adc5f1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"920f4f82-9772-4204-bcce-0579dcf41389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube"}}
	{"specversion":"1.0","id":"7dde4343-e8a2-40a4-b4c3-b29df4f3ceaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e33136a-4ede-4330-81ca-11de6759daaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"23d2333c-e95a-4bd2-947d-2c8c4b54385d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c64ea09a-f403-435d-ac9a-2738aaf9be57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-152000\" primary control-plane node in \"json-output-152000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"226ac8f1-20fd-4f07-819f-9c048f0058fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"f6fbb53c-ec92-4510-bc90-9a452f96679e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-152000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"60520eca-0e3f-48f5-8c04-5690b97c4a21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"61dec2e5-5249-425d-ab6a-763cc751e48c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"7bc08850-9394-4863-9552-93d395bd02e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-152000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6c2166cb-7bad-4bed-83bf-9f8823eab5bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"595bdb28-3017-4254-a8b4-be5bccb6c306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-152000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
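Note: the conversion error above ("invalid character 'O' looking for beginning of value") is the JSON decoder hitting the bare "OUTPUT:" line that the driver writes between the CloudEvents: the test parses every stdout line as a JSON event, so any non-JSON line from socket_vmnet_client aborts the conversion. A minimal illustration of the same failure mode (illustrative only, not code from the test):

    printf 'OUTPUT: \n' | python3 -c 'import json,sys; [json.loads(l) for l in sys.stdin]'
    # fails with JSONDecodeError ("Expecting value"), the Python analogue of the Go error above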

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-152000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-152000 --output=json --user=testUser: exit status 83 (78.384292ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3a622ab3-eacd-4aa7-b89c-5fb1cdcad648","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-152000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"df3552c6-6afd-4013-b42a-5fd934924083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-152000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-152000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-152000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-152000 --output=json --user=testUser: exit status 83 (43.120334ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-152000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-152000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-152000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-152000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.29s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-845000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-845000 --driver=qemu2 : exit status 80 (9.851917s)

                                                
                                                
-- stdout --
	* [first-845000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-845000" primary control-plane node in "first-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-845000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-20 04:22:55.755353 -0700 PDT m=+463.255240626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-846000 -n second-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-846000 -n second-846000: exit status 85 (79.459416ms)

                                                
                                                
-- stdout --
	* Profile "second-846000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-846000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-846000" host is not running, skipping log retrieval (state="* Profile \"second-846000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-846000\"")
helpers_test.go:175: Cleaning up "second-846000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-846000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-20 04:22:56.066847 -0700 PDT m=+463.566737917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-845000 -n first-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-845000 -n first-845000: exit status 7 (28.795791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-845000
--- FAIL: TestMinikubeProfile (10.29s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-854000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-854000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.135713166s)

                                                
                                                
-- stdout --
	* [mount-start-1-854000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-854000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-854000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-854000 -n mount-start-1-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-854000 -n mount-start-1-854000: exit status 7 (67.951416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.21s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-182000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-182000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.924129792s)

                                                
                                                
-- stdout --
	* [multinode-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-182000" primary control-plane node in "multinode-182000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:23:06.743791   16111 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:23:06.743911   16111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:23:06.743914   16111 out.go:304] Setting ErrFile to fd 2...
	I0520 04:23:06.743923   16111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:23:06.744054   16111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:23:06.745070   16111 out.go:298] Setting JSON to false
	I0520 04:23:06.761048   16111 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8557,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:23:06.761127   16111 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:23:06.766208   16111 out.go:177] * [multinode-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:23:06.773094   16111 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:23:06.777261   16111 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:23:06.773146   16111 notify.go:220] Checking for updates...
	I0520 04:23:06.783142   16111 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:23:06.786181   16111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:23:06.789196   16111 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:23:06.792191   16111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:23:06.795410   16111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:23:06.800060   16111 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:23:06.807136   16111 start.go:297] selected driver: qemu2
	I0520 04:23:06.807145   16111 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:23:06.807153   16111 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:23:06.809450   16111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:23:06.813027   16111 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:23:06.816252   16111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:23:06.816276   16111 cni.go:84] Creating CNI manager for ""
	I0520 04:23:06.816289   16111 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 04:23:06.816293   16111 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:23:06.816330   16111 start.go:340] cluster config:
	{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:23:06.820857   16111 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:23:06.828177   16111 out.go:177] * Starting "multinode-182000" primary control-plane node in "multinode-182000" cluster
	I0520 04:23:06.832105   16111 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:23:06.832123   16111 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:23:06.832137   16111 cache.go:56] Caching tarball of preloaded images
	I0520 04:23:06.832233   16111 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:23:06.832245   16111 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:23:06.832457   16111 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/multinode-182000/config.json ...
	I0520 04:23:06.832470   16111 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/multinode-182000/config.json: {Name:mk7ccbdc864d5ed5acc08a9a2ad816e2045a35af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:23:06.832691   16111 start.go:360] acquireMachinesLock for multinode-182000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:23:06.832729   16111 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "multinode-182000"
	I0520 04:23:06.832742   16111 start.go:93] Provisioning new machine with config: &{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:23:06.832788   16111 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:23:06.841169   16111 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:23:06.859425   16111 start.go:159] libmachine.API.Create for "multinode-182000" (driver="qemu2")
	I0520 04:23:06.859455   16111 client.go:168] LocalClient.Create starting
	I0520 04:23:06.859515   16111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:23:06.859544   16111 main.go:141] libmachine: Decoding PEM data...
	I0520 04:23:06.859589   16111 main.go:141] libmachine: Parsing certificate...
	I0520 04:23:06.859635   16111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:23:06.859660   16111 main.go:141] libmachine: Decoding PEM data...
	I0520 04:23:06.859667   16111 main.go:141] libmachine: Parsing certificate...
	I0520 04:23:06.860075   16111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:23:06.987290   16111 main.go:141] libmachine: Creating SSH key...
	I0520 04:23:07.038804   16111 main.go:141] libmachine: Creating Disk image...
	I0520 04:23:07.038812   16111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:23:07.038981   16111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:23:07.051339   16111 main.go:141] libmachine: STDOUT: 
	I0520 04:23:07.051358   16111 main.go:141] libmachine: STDERR: 
	I0520 04:23:07.051416   16111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2 +20000M
	I0520 04:23:07.062335   16111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:23:07.062360   16111 main.go:141] libmachine: STDERR: 
	I0520 04:23:07.062371   16111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:23:07.062375   16111 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:23:07.062411   16111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:3b:10:02:de:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:23:07.064084   16111 main.go:141] libmachine: STDOUT: 
	I0520 04:23:07.064107   16111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:23:07.064124   16111 client.go:171] duration metric: took 204.66575ms to LocalClient.Create
	I0520 04:23:09.066395   16111 start.go:128] duration metric: took 2.233557084s to createHost
	I0520 04:23:09.066530   16111 start.go:83] releasing machines lock for "multinode-182000", held for 2.233778458s
	W0520 04:23:09.066577   16111 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:23:09.073756   16111 out.go:177] * Deleting "multinode-182000" in qemu2 ...
	W0520 04:23:09.099948   16111 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:23:09.099978   16111 start.go:728] Will try again in 5 seconds ...
	I0520 04:23:14.102096   16111 start.go:360] acquireMachinesLock for multinode-182000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:23:14.102581   16111 start.go:364] duration metric: took 371.959µs to acquireMachinesLock for "multinode-182000"
	I0520 04:23:14.102733   16111 start.go:93] Provisioning new machine with config: &{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:23:14.103035   16111 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:23:14.112751   16111 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:23:14.161940   16111 start.go:159] libmachine.API.Create for "multinode-182000" (driver="qemu2")
	I0520 04:23:14.161991   16111 client.go:168] LocalClient.Create starting
	I0520 04:23:14.162111   16111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:23:14.162180   16111 main.go:141] libmachine: Decoding PEM data...
	I0520 04:23:14.162202   16111 main.go:141] libmachine: Parsing certificate...
	I0520 04:23:14.162262   16111 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:23:14.162305   16111 main.go:141] libmachine: Decoding PEM data...
	I0520 04:23:14.162316   16111 main.go:141] libmachine: Parsing certificate...
	I0520 04:23:14.162915   16111 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:23:14.301988   16111 main.go:141] libmachine: Creating SSH key...
	I0520 04:23:14.566524   16111 main.go:141] libmachine: Creating Disk image...
	I0520 04:23:14.566533   16111 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:23:14.566787   16111 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:23:14.580246   16111 main.go:141] libmachine: STDOUT: 
	I0520 04:23:14.580267   16111 main.go:141] libmachine: STDERR: 
	I0520 04:23:14.580327   16111 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2 +20000M
	I0520 04:23:14.591553   16111 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:23:14.591567   16111 main.go:141] libmachine: STDERR: 
	I0520 04:23:14.591580   16111 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:23:14.591584   16111 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:23:14.591626   16111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:66:3c:eb:0b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:23:14.593312   16111 main.go:141] libmachine: STDOUT: 
	I0520 04:23:14.593328   16111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:23:14.593342   16111 client.go:171] duration metric: took 431.35ms to LocalClient.Create
	I0520 04:23:16.595491   16111 start.go:128] duration metric: took 2.492453792s to createHost
	I0520 04:23:16.595596   16111 start.go:83] releasing machines lock for "multinode-182000", held for 2.492988166s
	W0520 04:23:16.596020   16111 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:23:16.608780   16111 out.go:177] 
	W0520 04:23:16.612758   16111 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:23:16.612796   16111 out.go:239] * 
	* 
	W0520 04:23:16.623297   16111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:23:16.627688   16111 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-182000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (51.661584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
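Every start attempt in this run fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the host is deleted and retried once before the test gives up. A minimal Go sketch (not part of the test suite) that checks the same precondition by dialing the unix socket the logs reference:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // socket path reported in the failures above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this host the daemon was not listening, so the dial fails the same way
		// socket_vmnet_client did ("connect: connection refused"), or with
		// "no such file or directory" if the socket was never created.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}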

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (56.220459ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-182000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- rollout status deployment/busybox: exit status 1 (55.217ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.273042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.552833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.991084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.11075ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.133ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.487209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.571167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.453083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.559875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.807833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.748458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.099167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.698958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.297833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (28.954375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (82.00s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-182000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.85075ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.282ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-182000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-182000 -v 3 --alsologtostderr: exit status 83 (38.717125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-182000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:38.804245   16215 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:38.804408   16215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:38.804411   16215 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:38.804414   16215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:38.804533   16215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:38.804779   16215 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:38.804955   16215 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:38.806956   16215 out.go:177] * The control-plane node multinode-182000 host is not running: state=Stopped
	I0520 04:24:38.811196   16215 out.go:177]   To start a cluster, run: "minikube start -p multinode-182000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-182000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.105916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-182000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-182000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.675ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-182000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-182000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-182000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.623334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
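Two errors are reported here: kubectl exits 1 because the multinode-182000 context no longer exists, so the captured jsonpath output is empty, and decoding that empty output is what yields "unexpected end of JSON input". A minimal sketch showing the same decode error on empty input (plain encoding/json, not the test's own code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// When kubectl fails, the captured jsonpath output is empty; decoding an
	// empty byte slice is what produces the secondary error in the log.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // prints: unexpected end of JSON input
}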

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-182000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-182000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-182000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"multinode-182000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (28.786459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
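The assertion counts the entries under Config.Nodes in the `profile list --output json` dump above and finds one node where three were expected. A minimal sketch of decoding just enough of that JSON to count nodes; the struct layout below is an assumption that only mirrors the keys visible in the dump:

package main

import (
	"encoding/json"
	"fmt"
)

// Just enough structure to reach Config.Nodes in the JSON shown above;
// every other field is ignored by encoding/json.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-182000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expected 3 nodes here; the stopped cluster reports only 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}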

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status --output json --alsologtostderr: exit status 7 (29.485542ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-182000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:39.026326   16228 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:39.026467   16228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.026471   16228 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:39.026473   16228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.026607   16228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:39.026728   16228 out.go:298] Setting JSON to true
	I0520 04:24:39.026739   16228 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:39.026803   16228 notify.go:220] Checking for updates...
	I0520 04:24:39.026939   16228 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:39.026945   16228 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:39.027137   16228 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:39.027141   16228 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:39.027143   16228 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-182000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.471292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
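The decode error arises because, with only one node reporting, `status --output json` prints a single object (as in the stdout block above) while the test unmarshals into a slice ([]cmd.Status). A hedged sketch of a decoder that tolerates both shapes; the status struct here only mirrors the keys shown above and is not the real cmd.Status type:

package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the keys in the stdout above; not the real cmd.Status type.
type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts both shapes the command can emit: a single object
// (one node) or an array (multiple nodes).
func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-182000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d status record(s), host=%s\n", len(sts), sts[0].Host)
}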

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 node stop m03: exit status 85 (45.237375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-182000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status: exit status 7 (29.161959ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr: exit status 7 (28.93425ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:39.160113   16236 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:39.160239   16236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.160242   16236 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:39.160244   16236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.160366   16236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:39.160479   16236 out.go:298] Setting JSON to false
	I0520 04:24:39.160488   16236 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:39.160544   16236 notify.go:220] Checking for updates...
	I0520 04:24:39.160682   16236 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:39.160693   16236 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:39.160898   16236 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:39.160903   16236 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:39.160905   16236 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr": multinode-182000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.144458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
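Beyond the missing m03 node (exit status 85), the follow-up check fails because the plain-text status output above reports every component Stopped, so no running kubelets are found. A minimal sketch of counting kubelet states from that text; it assumes the exact "kubelet: ..." lines shown above rather than the test's own parsing:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The exact text printed in the stdout block above.
	statusOut := `multinode-182000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	running := strings.Count(statusOut, "kubelet: Running")
	stopped := strings.Count(statusOut, "kubelet: Stopped")
	// The check expects running kubelets on the surviving nodes; here all are stopped.
	fmt.Printf("running kubelets: %d, stopped kubelets: %d\n", running, stopped)
}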

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.001959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:39.218738   16240 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:39.218899   16240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.218902   16240 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:39.218904   16240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.219026   16240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:39.219257   16240 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:39.219430   16240 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:39.223594   16240 out.go:177] 
	W0520 04:24:39.227513   16240 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0520 04:24:39.227519   16240 out.go:239] * 
	* 
	W0520 04:24:39.229720   16240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:24:39.233614   16240 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0520 04:24:39.218738   16240 out.go:291] Setting OutFile to fd 1 ...
I0520 04:24:39.218899   16240 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:24:39.218902   16240 out.go:304] Setting ErrFile to fd 2...
I0520 04:24:39.218904   16240 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 04:24:39.219026   16240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
I0520 04:24:39.219257   16240 mustload.go:65] Loading cluster: multinode-182000
I0520 04:24:39.219430   16240 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 04:24:39.223594   16240 out.go:177] 
W0520 04:24:39.227513   16240 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0520 04:24:39.227519   16240 out.go:239] * 
* 
W0520 04:24:39.229720   16240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 04:24:39.233614   16240 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-182000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (29.638667ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:39.265399   16242 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:39.265547   16242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.265550   16242 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:39.265552   16242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:39.265663   16242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:39.265779   16242 out.go:298] Setting JSON to false
	I0520 04:24:39.265788   16242 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:39.265850   16242 notify.go:220] Checking for updates...
	I0520 04:24:39.265985   16242 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:39.265992   16242 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:39.266186   16242 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:39.266190   16242 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:39.266192   16242 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (72.40475ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:40.783784   16248 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:40.783997   16248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:40.784001   16248 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:40.784005   16248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:40.784163   16248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:40.784338   16248 out.go:298] Setting JSON to false
	I0520 04:24:40.784349   16248 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:40.784388   16248 notify.go:220] Checking for updates...
	I0520 04:24:40.784614   16248 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:40.784622   16248 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:40.784918   16248 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:40.784923   16248 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:40.784925   16248 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (74.100375ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:42.570820   16250 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:42.571013   16250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:42.571017   16250 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:42.571020   16250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:42.571191   16250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:42.571330   16250 out.go:298] Setting JSON to false
	I0520 04:24:42.571341   16250 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:42.571384   16250 notify.go:220] Checking for updates...
	I0520 04:24:42.571626   16250 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:42.571634   16250 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:42.571936   16250 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:42.571941   16250 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:42.571944   16250 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (75.419833ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:45.014523   16254 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:45.014755   16254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:45.014760   16254 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:45.014763   16254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:45.014950   16254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:45.015124   16254 out.go:298] Setting JSON to false
	I0520 04:24:45.015137   16254 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:45.015191   16254 notify.go:220] Checking for updates...
	I0520 04:24:45.015444   16254 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:45.015455   16254 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:45.015755   16254 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:45.015760   16254 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:45.015763   16254 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (72.857583ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:48.022503   16258 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:48.022688   16258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:48.022692   16258 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:48.022695   16258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:48.022903   16258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:48.023059   16258 out.go:298] Setting JSON to false
	I0520 04:24:48.023070   16258 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:48.023114   16258 notify.go:220] Checking for updates...
	I0520 04:24:48.023325   16258 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:48.023333   16258 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:48.023630   16258 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:48.023635   16258 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:48.023638   16258 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (73.099542ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:24:50.836143   16263 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:24:50.836352   16263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:50.836356   16263 out.go:304] Setting ErrFile to fd 2...
	I0520 04:24:50.836360   16263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:24:50.836542   16263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:24:50.836709   16263 out.go:298] Setting JSON to false
	I0520 04:24:50.836720   16263 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:24:50.836758   16263 notify.go:220] Checking for updates...
	I0520 04:24:50.836980   16263 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:24:50.836988   16263 status.go:255] checking status of multinode-182000 ...
	I0520 04:24:50.837258   16263 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:24:50.837263   16263 status.go:343] host is not running, skipping remaining checks
	I0520 04:24:50.837266   16263 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (72.791166ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:02.253074   16277 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:02.253293   16277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:02.253297   16277 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:02.253300   16277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:02.253469   16277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:02.253632   16277 out.go:298] Setting JSON to false
	I0520 04:25:02.253644   16277 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:25:02.253689   16277 notify.go:220] Checking for updates...
	I0520 04:25:02.253931   16277 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:02.253940   16277 status.go:255] checking status of multinode-182000 ...
	I0520 04:25:02.254222   16277 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:25:02.254227   16277 status.go:343] host is not running, skipping remaining checks
	I0520 04:25:02.254230   16277 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (74.242334ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:12.094844   16283 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:12.095087   16283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:12.095092   16283 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:12.095095   16283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:12.095273   16283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:12.095437   16283 out.go:298] Setting JSON to false
	I0520 04:25:12.095449   16283 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:25:12.095493   16283 notify.go:220] Checking for updates...
	I0520 04:25:12.095746   16283 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:12.095755   16283 status.go:255] checking status of multinode-182000 ...
	I0520 04:25:12.096040   16283 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:25:12.096046   16283 status.go:343] host is not running, skipping remaining checks
	I0520 04:25:12.096049   16283 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr: exit status 7 (74.519542ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:31.069251   16288 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:31.069455   16288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:31.069459   16288 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:31.069462   16288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:31.069644   16288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:31.069790   16288 out.go:298] Setting JSON to false
	I0520 04:25:31.069802   16288 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:25:31.069838   16288 notify.go:220] Checking for updates...
	I0520 04:25:31.070068   16288 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:31.070077   16288 status.go:255] checking status of multinode-182000 ...
	I0520 04:25:31.070361   16288 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:25:31.070367   16288 status.go:343] host is not running, skipping remaining checks
	I0520 04:25:31.070370   16288 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-182000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (33.009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.91s)
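The ~52 s duration comes from the test re-running the status command at growing intervals (04:24:39 through 04:25:31 in the timestamps above) while the host stays Stopped. A rough sketch of that kind of retry loop with exponential backoff around the status probe; the delays and cutoff are illustrative, not the test's actual backoff:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const minikube = "out/minikube-darwin-arm64"
	const profile = "multinode-182000"

	// Illustrative backoff: retry the status probe with growing delays,
	// roughly matching the timestamp gaps in the log above.
	delay := time.Second
	deadline := time.Now().Add(50 * time.Second)
	for time.Now().Before(deadline) {
		out, _ := exec.Command(minikube, "-p", profile, "status").Output()
		if strings.Contains(string(out), "host: Running") {
			fmt.Println("node is back up")
			return
		}
		time.Sleep(delay)
		if delay < 20*time.Second {
			delay *= 2
		}
	}
	fmt.Println("gave up waiting: host still stopped")
}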

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-182000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-182000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-182000: (2.134722958s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-182000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-182000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22341625s)

                                                
                                                
-- stdout --
	* [multinode-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-182000" primary control-plane node in "multinode-182000" cluster
	* Restarting existing qemu2 VM for "multinode-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:33.330293   16309 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:33.330481   16309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:33.330485   16309 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:33.330489   16309 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:33.330650   16309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:33.331830   16309 out.go:298] Setting JSON to false
	I0520 04:25:33.351178   16309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8704,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:25:33.351254   16309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:25:33.354944   16309 out.go:177] * [multinode-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:25:33.362984   16309 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:25:33.363076   16309 notify.go:220] Checking for updates...
	I0520 04:25:33.369941   16309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:25:33.373020   16309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:25:33.375947   16309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:25:33.378953   16309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:25:33.385832   16309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:25:33.389234   16309 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:33.389292   16309 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:25:33.392985   16309 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:25:33.400896   16309 start.go:297] selected driver: qemu2
	I0520 04:25:33.400902   16309 start.go:901] validating driver "qemu2" against &{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:25:33.400949   16309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:25:33.403438   16309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:25:33.403468   16309 cni.go:84] Creating CNI manager for ""
	I0520 04:25:33.403473   16309 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:25:33.403529   16309 start.go:340] cluster config:
	{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:25:33.408190   16309 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:25:33.414920   16309 out.go:177] * Starting "multinode-182000" primary control-plane node in "multinode-182000" cluster
	I0520 04:25:33.418981   16309 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:25:33.419000   16309 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:25:33.419015   16309 cache.go:56] Caching tarball of preloaded images
	I0520 04:25:33.419080   16309 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:25:33.419086   16309 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:25:33.419153   16309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/multinode-182000/config.json ...
	I0520 04:25:33.419566   16309 start.go:360] acquireMachinesLock for multinode-182000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:25:33.419603   16309 start.go:364] duration metric: took 30.833µs to acquireMachinesLock for "multinode-182000"
	I0520 04:25:33.419614   16309 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:25:33.419619   16309 fix.go:54] fixHost starting: 
	I0520 04:25:33.419759   16309 fix.go:112] recreateIfNeeded on multinode-182000: state=Stopped err=<nil>
	W0520 04:25:33.419770   16309 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:25:33.427912   16309 out.go:177] * Restarting existing qemu2 VM for "multinode-182000" ...
	I0520 04:25:33.432021   16309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:66:3c:eb:0b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:25:33.434240   16309 main.go:141] libmachine: STDOUT: 
	I0520 04:25:33.434264   16309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:25:33.434295   16309 fix.go:56] duration metric: took 14.676125ms for fixHost
	I0520 04:25:33.434299   16309 start.go:83] releasing machines lock for "multinode-182000", held for 14.691375ms
	W0520 04:25:33.434307   16309 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:25:33.434345   16309 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:25:33.434351   16309 start.go:728] Will try again in 5 seconds ...
	I0520 04:25:38.436467   16309 start.go:360] acquireMachinesLock for multinode-182000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:25:38.436866   16309 start.go:364] duration metric: took 316.375µs to acquireMachinesLock for "multinode-182000"
	I0520 04:25:38.436979   16309 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:25:38.436999   16309 fix.go:54] fixHost starting: 
	I0520 04:25:38.437751   16309 fix.go:112] recreateIfNeeded on multinode-182000: state=Stopped err=<nil>
	W0520 04:25:38.437778   16309 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:25:38.444225   16309 out.go:177] * Restarting existing qemu2 VM for "multinode-182000" ...
	I0520 04:25:38.448385   16309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:66:3c:eb:0b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:25:38.457205   16309 main.go:141] libmachine: STDOUT: 
	I0520 04:25:38.457271   16309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:25:38.457328   16309 fix.go:56] duration metric: took 20.33225ms for fixHost
	I0520 04:25:38.457345   16309 start.go:83] releasing machines lock for "multinode-182000", held for 20.459375ms
	W0520 04:25:38.457542   16309 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:25:38.464237   16309 out.go:177] 
	W0520 04:25:38.468308   16309 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:25:38.468369   16309 out.go:239] * 
	* 
	W0520 04:25:38.471203   16309 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:25:38.479107   16309 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-182000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-182000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (32.123292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.49s)
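
Every qemu2 start in this run fails at the same step: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM is never launched. The stand-alone Go sketch below is an illustration only (not part of minikube or of the test suite); it probes that socket the way a client would, reports the same refusal on a host in this state, and succeeds only once a socket_vmnet daemon is actually listening on that path.

// probe_socket_vmnet.go - illustration only: checks whether anything is listening
// on the unix socket that socket_vmnet_client tries to use. The path is taken from
// the "Connection refused" errors in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" (or "no such file") here matches the driver-start
		// failures in this report: no socket_vmnet daemon has the socket open.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}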

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 node delete m03: exit status 83 (41.212625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-182000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-182000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr: exit status 7 (28.951875ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:38.662239   16323 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:38.662408   16323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:38.662411   16323 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:38.662413   16323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:38.662546   16323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:38.662671   16323 out.go:298] Setting JSON to false
	I0520 04:25:38.662680   16323 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:25:38.662745   16323 notify.go:220] Checking for updates...
	I0520 04:25:38.662889   16323 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:38.662896   16323 status.go:255] checking status of multinode-182000 ...
	I0520 04:25:38.663091   16323 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:25:38.663094   16323 status.go:343] host is not running, skipping remaining checks
	I0520 04:25:38.663096   16323 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.243291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-182000 stop: (3.498161s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status: exit status 7 (63.146416ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr: exit status 7 (31.544333ms)

                                                
                                                
-- stdout --
	multinode-182000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:42.284974   16349 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:42.285129   16349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:42.285132   16349 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:42.285134   16349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:42.285256   16349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:42.285359   16349 out.go:298] Setting JSON to false
	I0520 04:25:42.285368   16349 mustload.go:65] Loading cluster: multinode-182000
	I0520 04:25:42.285429   16349 notify.go:220] Checking for updates...
	I0520 04:25:42.285579   16349 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:42.285589   16349 status.go:255] checking status of multinode-182000 ...
	I0520 04:25:42.285797   16349 status.go:330] multinode-182000 host status = "Stopped" (err=<nil>)
	I0520 04:25:42.285801   16349 status.go:343] host is not running, skipping remaining checks
	I0520 04:25:42.285804   16349 status.go:257] multinode-182000 status: &{Name:multinode-182000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr": multinode-182000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-182000 status --alsologtostderr": multinode-182000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (29.034542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.62s)
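
The "incorrect number of stopped hosts" and "incorrect number of stopped kubelets" failures follow directly from the status text above: the earlier restarts never brought the worker nodes back, so "minikube status" lists only the primary control-plane node, and the output contains a single "host: Stopped" / "kubelet: Stopped" pair where the test expects one per node of the multi-node cluster. The Go sketch below only illustrates that kind of counting check; it is not the actual assertion code in multinode_test.go.

// count_stopped.go - illustration only: counts "host: Stopped" entries in
// "minikube status" output such as the block captured above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	status := `multinode-182000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	stoppedHosts := strings.Count(status, "host: Stopped")
	stoppedKubelets := strings.Count(status, "kubelet: Stopped")
	// With more than one node expected, a count of 1 fails the check, as in this run.
	fmt.Printf("stopped hosts: %d, stopped kubelets: %d\n", stoppedHosts, stoppedKubelets)
}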

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-182000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-182000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178657584s)

                                                
                                                
-- stdout --
	* [multinode-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-182000" primary control-plane node in "multinode-182000" cluster
	* Restarting existing qemu2 VM for "multinode-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:25:42.343251   16353 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:25:42.343380   16353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:42.343383   16353 out.go:304] Setting ErrFile to fd 2...
	I0520 04:25:42.343386   16353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:25:42.343517   16353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:25:42.344545   16353 out.go:298] Setting JSON to false
	I0520 04:25:42.360497   16353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8713,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:25:42.360585   16353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:25:42.365914   16353 out.go:177] * [multinode-182000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:25:42.373857   16353 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:25:42.377892   16353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:25:42.373908   16353 notify.go:220] Checking for updates...
	I0520 04:25:42.381842   16353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:25:42.384920   16353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:25:42.387897   16353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:25:42.390848   16353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:25:42.394179   16353 config.go:182] Loaded profile config "multinode-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:25:42.394424   16353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:25:42.398898   16353 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:25:42.405804   16353 start.go:297] selected driver: qemu2
	I0520 04:25:42.405811   16353 start.go:901] validating driver "qemu2" against &{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:25:42.405861   16353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:25:42.408116   16353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:25:42.408138   16353 cni.go:84] Creating CNI manager for ""
	I0520 04:25:42.408143   16353 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 04:25:42.408197   16353 start.go:340] cluster config:
	{Name:multinode-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:25:42.412575   16353 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:25:42.419800   16353 out.go:177] * Starting "multinode-182000" primary control-plane node in "multinode-182000" cluster
	I0520 04:25:42.423830   16353 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:25:42.423852   16353 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:25:42.423864   16353 cache.go:56] Caching tarball of preloaded images
	I0520 04:25:42.423922   16353 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:25:42.423927   16353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:25:42.424002   16353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/multinode-182000/config.json ...
	I0520 04:25:42.424316   16353 start.go:360] acquireMachinesLock for multinode-182000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:25:42.424344   16353 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "multinode-182000"
	I0520 04:25:42.424354   16353 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:25:42.424361   16353 fix.go:54] fixHost starting: 
	I0520 04:25:42.424477   16353 fix.go:112] recreateIfNeeded on multinode-182000: state=Stopped err=<nil>
	W0520 04:25:42.424486   16353 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:25:42.427814   16353 out.go:177] * Restarting existing qemu2 VM for "multinode-182000" ...
	I0520 04:25:42.435719   16353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:66:3c:eb:0b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:25:42.437690   16353 main.go:141] libmachine: STDOUT: 
	I0520 04:25:42.437708   16353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:25:42.437734   16353 fix.go:56] duration metric: took 13.373833ms for fixHost
	I0520 04:25:42.437743   16353 start.go:83] releasing machines lock for "multinode-182000", held for 13.391791ms
	W0520 04:25:42.437748   16353 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:25:42.437775   16353 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:25:42.437780   16353 start.go:728] Will try again in 5 seconds ...
	I0520 04:25:47.439844   16353 start.go:360] acquireMachinesLock for multinode-182000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:25:47.440210   16353 start.go:364] duration metric: took 298.541µs to acquireMachinesLock for "multinode-182000"
	I0520 04:25:47.440352   16353 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:25:47.440376   16353 fix.go:54] fixHost starting: 
	I0520 04:25:47.441026   16353 fix.go:112] recreateIfNeeded on multinode-182000: state=Stopped err=<nil>
	W0520 04:25:47.441052   16353 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:25:47.449419   16353 out.go:177] * Restarting existing qemu2 VM for "multinode-182000" ...
	I0520 04:25:47.453457   16353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:66:3c:eb:0b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/multinode-182000/disk.qcow2
	I0520 04:25:47.462228   16353 main.go:141] libmachine: STDOUT: 
	I0520 04:25:47.462294   16353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:25:47.462405   16353 fix.go:56] duration metric: took 22.034125ms for fixHost
	I0520 04:25:47.462423   16353 start.go:83] releasing machines lock for "multinode-182000", held for 22.171041ms
	W0520 04:25:47.462576   16353 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:25:47.469484   16353 out.go:177] 
	W0520 04:25:47.473517   16353 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:25:47.473543   16353 out.go:239] * 
	* 
	W0520 04:25:47.476388   16353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:25:47.481402   16353 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-182000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (68.608667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
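
Each restart above launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and the qemu argument list ends with "-netdev socket,id=net0,fd=3": the client is expected to open /var/run/socket_vmnet and hand the connected socket to qemu as file descriptor 3, and it is exactly that connect step that fails here. The Go sketch below only illustrates the fd-passing idea; it is not socket_vmnet_client, and the qemu arguments are trimmed to the one relevant flag.

// fd_passing.go - illustration of handing a connected unix socket to a child
// process as fd 3, the mechanism implied by "-netdev socket,id=net0,fd=3" above.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("cannot reach socket_vmnet: %v", err) // the step that fails throughout this report
	}
	sockFile, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// The real invocation carries the full argument list shown in the log;
	// only the netdev flag matters for this illustration.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.ExtraFiles = []*os.File{sockFile} // ExtraFiles[0] becomes fd 3 in the child
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}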

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-182000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-182000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-182000-m01 --driver=qemu2 : exit status 80 (10.068753375s)

                                                
                                                
-- stdout --
	* [multinode-182000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-182000-m01" primary control-plane node in "multinode-182000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-182000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-182000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-182000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-182000-m02 --driver=qemu2 : exit status 80 (10.223684291s)

                                                
                                                
-- stdout --
	* [multinode-182000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-182000-m02" primary control-plane node in "multinode-182000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-182000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-182000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-182000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-182000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-182000: exit status 83 (84.268083ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-182000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-182000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-182000 -n multinode-182000: exit status 7 (30.248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.54s)
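
The two profile starts above ("multinode-182000-m01" and "multinode-182000-m02") fail on the create path with the same socket error, each following the pattern visible in the stdout: one failed create, a "Deleting ... in qemu2" cleanup, one retry, then GUEST_PROVISION. The Go sketch below compresses that shape; createHost and deleteProfile are stand-ins for illustration, not minikube's real functions, and the 5-second pause is assumed from the restart logs earlier in this report.

// retry_create.go - illustration of the create/retry shape seen above; the
// functions are stand-ins, not minikube code.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost(profile string) error {
	// Stand-in for the qemu2 driver create that fails throughout this report.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteProfile(profile string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
}

func main() {
	profile := "multinode-182000-m02"
	if err := createHost(profile); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteProfile(profile)
		time.Sleep(5 * time.Second) // assumed; the restart path above waits 5 seconds before retrying
		if err := createHost(profile); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}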

                                                
                                    
TestPreload (9.94s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-755000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-755000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.769426291s)

                                                
                                                
-- stdout --
	* [test-preload-755000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-755000" primary control-plane node in "test-preload-755000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-755000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:26:08.264451   16407 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:26:08.264579   16407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:08.264582   16407 out.go:304] Setting ErrFile to fd 2...
	I0520 04:26:08.264584   16407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:26:08.264738   16407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:26:08.265787   16407 out.go:298] Setting JSON to false
	I0520 04:26:08.281782   16407 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8739,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:26:08.281839   16407 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:26:08.286826   16407 out.go:177] * [test-preload-755000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:26:08.292690   16407 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:26:08.292768   16407 notify.go:220] Checking for updates...
	I0520 04:26:08.299566   16407 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:26:08.302710   16407 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:26:08.305737   16407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:26:08.307084   16407 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:26:08.309673   16407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:26:08.313044   16407 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:26:08.313109   16407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:26:08.317537   16407 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:26:08.324667   16407 start.go:297] selected driver: qemu2
	I0520 04:26:08.324675   16407 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:26:08.324682   16407 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:26:08.326890   16407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:26:08.329734   16407 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:26:08.332894   16407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:26:08.332916   16407 cni.go:84] Creating CNI manager for ""
	I0520 04:26:08.332925   16407 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:26:08.332932   16407 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:26:08.332969   16407 start.go:340] cluster config:
	{Name:test-preload-755000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:26:08.337737   16407 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.343610   16407 out.go:177] * Starting "test-preload-755000" primary control-plane node in "test-preload-755000" cluster
	I0520 04:26:08.347651   16407 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0520 04:26:08.347727   16407 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/test-preload-755000/config.json ...
	I0520 04:26:08.347747   16407 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/test-preload-755000/config.json: {Name:mkf6f14fd4cc02abc9e65bf628f5c04bcedf5c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:26:08.347757   16407 cache.go:107] acquiring lock: {Name:mkcaab2a68a35b8acb94ecacdb51dcdce2308ba2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.347766   16407 cache.go:107] acquiring lock: {Name:mk444a7ecc9a22caf1d26a46ca1e133e693a2457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.347770   16407 cache.go:107] acquiring lock: {Name:mkd128dec703fd1368a7d715669114326744c179 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.347794   16407 cache.go:107] acquiring lock: {Name:mkc5c59912a431d03d2e3a1d73de841a8df59a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.347795   16407 cache.go:107] acquiring lock: {Name:mkc2ec02a66b7ced6c2f44a36d9cf9c78db88a7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.348007   16407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:26:08.348028   16407 cache.go:107] acquiring lock: {Name:mk1d37fc87fa2e22492cf90b5bef1fe7f34e3646 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.348042   16407 cache.go:107] acquiring lock: {Name:mk385c8c1d0b6c64906e47f23e92287e3234c955 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.348117   16407 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:26:08.348136   16407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 04:26:08.348148   16407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:26:08.348140   16407 start.go:360] acquireMachinesLock for test-preload-755000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:08.348186   16407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 04:26:08.348206   16407 start.go:364] duration metric: took 34.042µs to acquireMachinesLock for "test-preload-755000"
	I0520 04:26:08.348203   16407 cache.go:107] acquiring lock: {Name:mk00af23ff7924ed35cbc717f2c842bfe95d63e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:26:08.348219   16407 start.go:93] Provisioning new machine with config: &{Name:test-preload-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:08.348247   16407 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:08.355733   16407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:26:08.348275   16407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:26:08.348371   16407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 04:26:08.348382   16407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 04:26:08.361273   16407 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:26:08.361332   16407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 04:26:08.361991   16407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 04:26:08.361992   16407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:26:08.362056   16407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:26:08.364307   16407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 04:26:08.364343   16407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:26:08.364373   16407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 04:26:08.373352   16407 start.go:159] libmachine.API.Create for "test-preload-755000" (driver="qemu2")
	I0520 04:26:08.373373   16407 client.go:168] LocalClient.Create starting
	I0520 04:26:08.373437   16407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:08.373469   16407 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:08.373478   16407 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:08.373522   16407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:08.373544   16407 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:08.373553   16407 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:08.373912   16407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:08.508621   16407 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:08.555592   16407 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:08.555613   16407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:08.555817   16407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2
	I0520 04:26:08.569554   16407 main.go:141] libmachine: STDOUT: 
	I0520 04:26:08.569573   16407 main.go:141] libmachine: STDERR: 
	I0520 04:26:08.569634   16407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2 +20000M
	I0520 04:26:08.582366   16407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:08.582384   16407 main.go:141] libmachine: STDERR: 
	I0520 04:26:08.582413   16407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2
	I0520 04:26:08.582416   16407 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:08.582447   16407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:1f:50:93:2e:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2
	I0520 04:26:08.584583   16407 main.go:141] libmachine: STDOUT: 
	I0520 04:26:08.584628   16407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:08.584645   16407 client.go:171] duration metric: took 211.270833ms to LocalClient.Create
	I0520 04:26:08.735557   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0520 04:26:08.765008   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:26:08.766142   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0520 04:26:08.806189   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:26:08.813960   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0520 04:26:08.844142   16407 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:26:08.844174   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:26:08.861379   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0520 04:26:08.898339   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0520 04:26:08.898389   16407 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 550.597875ms
	I0520 04:26:08.898412   16407 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0520 04:26:09.209085   16407 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:26:09.209200   16407 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:26:09.478940   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 04:26:09.479014   16407 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.131257875s
	I0520 04:26:09.479043   16407 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 04:26:10.584887   16407 start.go:128] duration metric: took 2.23664475s to createHost
	I0520 04:26:10.584951   16407 start.go:83] releasing machines lock for "test-preload-755000", held for 2.236761167s
	W0520 04:26:10.584996   16407 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:10.596848   16407 out.go:177] * Deleting "test-preload-755000" in qemu2 ...
	W0520 04:26:10.622464   16407 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:10.622483   16407 start.go:728] Will try again in 5 seconds ...
	I0520 04:26:10.892014   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0520 04:26:10.892057   16407 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.544062041s
	I0520 04:26:10.892088   16407 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0520 04:26:11.126295   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0520 04:26:11.126343   16407 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.778368333s
	I0520 04:26:11.126389   16407 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0520 04:26:13.124058   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0520 04:26:13.124126   16407 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.77642425s
	I0520 04:26:13.124152   16407 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0520 04:26:13.550833   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0520 04:26:13.550880   16407 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.203173291s
	I0520 04:26:13.550906   16407 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0520 04:26:14.730796   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0520 04:26:14.730845   16407 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.382718875s
	I0520 04:26:14.730873   16407 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0520 04:26:15.325521   16407 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0520 04:26:15.325585   16407 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 6.977869542s
	I0520 04:26:15.325611   16407 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0520 04:26:15.325641   16407 cache.go:87] Successfully saved all images to host disk.
	I0520 04:26:15.624621   16407 start.go:360] acquireMachinesLock for test-preload-755000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:26:15.625033   16407 start.go:364] duration metric: took 353.334µs to acquireMachinesLock for "test-preload-755000"
	I0520 04:26:15.625140   16407 start.go:93] Provisioning new machine with config: &{Name:test-preload-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:26:15.625423   16407 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:26:15.632030   16407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:26:15.682252   16407 start.go:159] libmachine.API.Create for "test-preload-755000" (driver="qemu2")
	I0520 04:26:15.682326   16407 client.go:168] LocalClient.Create starting
	I0520 04:26:15.682445   16407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:26:15.682515   16407 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:15.682533   16407 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:15.682601   16407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:26:15.682643   16407 main.go:141] libmachine: Decoding PEM data...
	I0520 04:26:15.682658   16407 main.go:141] libmachine: Parsing certificate...
	I0520 04:26:15.683172   16407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:26:15.827217   16407 main.go:141] libmachine: Creating SSH key...
	I0520 04:26:15.932150   16407 main.go:141] libmachine: Creating Disk image...
	I0520 04:26:15.932156   16407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:26:15.932340   16407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2
	I0520 04:26:15.944865   16407 main.go:141] libmachine: STDOUT: 
	I0520 04:26:15.944888   16407 main.go:141] libmachine: STDERR: 
	I0520 04:26:15.944954   16407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2 +20000M
	I0520 04:26:15.955839   16407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:26:15.955875   16407 main.go:141] libmachine: STDERR: 
	I0520 04:26:15.955892   16407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2
	I0520 04:26:15.955903   16407 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:26:15.955950   16407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:25:2f:0e:ad:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/test-preload-755000/disk.qcow2
	I0520 04:26:15.957760   16407 main.go:141] libmachine: STDOUT: 
	I0520 04:26:15.957782   16407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:26:15.957796   16407 client.go:171] duration metric: took 275.467ms to LocalClient.Create
	I0520 04:26:17.959123   16407 start.go:128] duration metric: took 2.333671875s to createHost
	I0520 04:26:17.959204   16407 start.go:83] releasing machines lock for "test-preload-755000", held for 2.334178s
	W0520 04:26:17.959585   16407 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-755000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-755000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:26:17.972026   16407 out.go:177] 
	W0520 04:26:17.977262   16407 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:26:17.977312   16407 out.go:239] * 
	* 
	W0520 04:26:17.980044   16407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:26:17.991107   16407 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-755000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-20 04:26:18.009818 -0700 PDT m=+665.512136292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-755000 -n test-preload-755000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-755000 -n test-preload-755000: exit status 7 (64.914416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-755000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-755000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-755000
--- FAIL: TestPreload (9.94s)

                                                
                                    
TestScheduledStopUnix (10.21s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-922000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-922000 --memory=2048 --driver=qemu2 : exit status 80 (10.034538291s)

                                                
                                                
-- stdout --
	* [scheduled-stop-922000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-922000" primary control-plane node in "scheduled-stop-922000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-922000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-922000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-922000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-922000" primary control-plane node in "scheduled-stop-922000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-922000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-922000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-20 04:26:28.209028 -0700 PDT m=+675.711468876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-922000 -n scheduled-stop-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-922000 -n scheduled-stop-922000: exit status 7 (68.816375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-922000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-922000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-922000
--- FAIL: TestScheduledStopUnix (10.21s)

                                                
                                    
TestSkaffold (12.38s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4124165688 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-821000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-821000 --memory=2600 --driver=qemu2 : exit status 80 (9.921062042s)

                                                
                                                
-- stdout --
	* [skaffold-821000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-821000" primary control-plane node in "skaffold-821000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-821000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-821000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-821000" primary control-plane node in "skaffold-821000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-821000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-20 04:26:40.599047 -0700 PDT m=+688.101636209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-821000 -n skaffold-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-821000 -n skaffold-821000: exit status 7 (63.016875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-821000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-821000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-821000
--- FAIL: TestSkaffold (12.38s)

                                                
                                    
TestRunningBinaryUpgrade (601.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.328908288 start -p running-upgrade-901000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.328908288 start -p running-upgrade-901000 --memory=2200 --vm-driver=qemu2 : (51.368750875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-901000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-901000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m35.025934125s)

                                                
                                                
-- stdout --
	* [running-upgrade-901000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-901000" primary control-plane node in "running-upgrade-901000" cluster
	* Updating the running qemu2 "running-upgrade-901000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:28:14.954272   16800 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:28:14.954476   16800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:28:14.954480   16800 out.go:304] Setting ErrFile to fd 2...
	I0520 04:28:14.954482   16800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:28:14.954617   16800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:28:14.955632   16800 out.go:298] Setting JSON to false
	I0520 04:28:14.972828   16800 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8865,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:28:14.972900   16800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:28:14.978316   16800 out.go:177] * [running-upgrade-901000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:28:14.989376   16800 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:28:14.986347   16800 notify.go:220] Checking for updates...
	I0520 04:28:14.995278   16800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:28:14.998438   16800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:28:15.001431   16800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:28:15.002758   16800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:28:15.005411   16800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:28:15.008735   16800 config.go:182] Loaded profile config "running-upgrade-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:28:15.012358   16800 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:28:15.015435   16800 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:28:15.016924   16800 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:28:15.023400   16800 start.go:297] selected driver: qemu2
	I0520 04:28:15.023406   16800 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53009 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-901000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:28:15.023454   16800 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:28:15.025958   16800 cni.go:84] Creating CNI manager for ""
	I0520 04:28:15.025975   16800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:28:15.026001   16800 start.go:340] cluster config:
	{Name:running-upgrade-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53009 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-901000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:28:15.026050   16800 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:28:15.033398   16800 out.go:177] * Starting "running-upgrade-901000" primary control-plane node in "running-upgrade-901000" cluster
	I0520 04:28:15.037368   16800 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:28:15.037386   16800 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 04:28:15.037393   16800 cache.go:56] Caching tarball of preloaded images
	I0520 04:28:15.037447   16800 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:28:15.037452   16800 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 04:28:15.037505   16800 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/config.json ...
	I0520 04:28:15.037899   16800 start.go:360] acquireMachinesLock for running-upgrade-901000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:28:15.037928   16800 start.go:364] duration metric: took 22.417µs to acquireMachinesLock for "running-upgrade-901000"
	I0520 04:28:15.037937   16800 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:28:15.037942   16800 fix.go:54] fixHost starting: 
	I0520 04:28:15.038604   16800 fix.go:112] recreateIfNeeded on running-upgrade-901000: state=Running err=<nil>
	W0520 04:28:15.038613   16800 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:28:15.045420   16800 out.go:177] * Updating the running qemu2 "running-upgrade-901000" VM ...
	I0520 04:28:15.049288   16800 machine.go:94] provisionDockerMachine start ...
	I0520 04:28:15.049323   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.049424   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.049429   16800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:28:15.115727   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-901000
	
	I0520 04:28:15.115738   16800 buildroot.go:166] provisioning hostname "running-upgrade-901000"
	I0520 04:28:15.115798   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.115923   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.115931   16800 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-901000 && echo "running-upgrade-901000" | sudo tee /etc/hostname
	I0520 04:28:15.188170   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-901000
	
	I0520 04:28:15.188221   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.188332   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.188340   16800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-901000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-901000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-901000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:28:15.253060   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:28:15.253074   16800 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18932-14402/.minikube CaCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18932-14402/.minikube}
	I0520 04:28:15.253088   16800 buildroot.go:174] setting up certificates
	I0520 04:28:15.253093   16800 provision.go:84] configureAuth start
	I0520 04:28:15.253099   16800 provision.go:143] copyHostCerts
	I0520 04:28:15.253158   16800 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem, removing ...
	I0520 04:28:15.253164   16800 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem
	I0520 04:28:15.253280   16800 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem (1078 bytes)
	I0520 04:28:15.253461   16800 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem, removing ...
	I0520 04:28:15.253464   16800 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem
	I0520 04:28:15.253511   16800 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem (1123 bytes)
	I0520 04:28:15.253610   16800 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem, removing ...
	I0520 04:28:15.253613   16800 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem
	I0520 04:28:15.253658   16800 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem (1679 bytes)
	I0520 04:28:15.253751   16800 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-901000 san=[127.0.0.1 localhost minikube running-upgrade-901000]
	I0520 04:28:15.360664   16800 provision.go:177] copyRemoteCerts
	I0520 04:28:15.360713   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:28:15.360721   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:28:15.394861   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:28:15.401716   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 04:28:15.408228   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 04:28:15.415225   16800 provision.go:87] duration metric: took 162.129708ms to configureAuth
	I0520 04:28:15.415234   16800 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:28:15.415340   16800 config.go:182] Loaded profile config "running-upgrade-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:28:15.415379   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.415510   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.415515   16800 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:28:15.481121   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:28:15.481130   16800 buildroot.go:70] root file system type: tmpfs
	I0520 04:28:15.481181   16800 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:28:15.481224   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.481340   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.481377   16800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:28:15.553379   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:28:15.553428   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.553551   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.553559   16800 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:28:15.621531   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:28:15.621543   16800 machine.go:97] duration metric: took 572.256583ms to provisionDockerMachine
	I0520 04:28:15.621549   16800 start.go:293] postStartSetup for "running-upgrade-901000" (driver="qemu2")
	I0520 04:28:15.621555   16800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:28:15.621603   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:28:15.621611   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:28:15.657258   16800 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:28:15.658407   16800 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 04:28:15.658416   16800 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18932-14402/.minikube/addons for local assets ...
	I0520 04:28:15.658486   16800 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18932-14402/.minikube/files for local assets ...
	I0520 04:28:15.658578   16800 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem -> 148952.pem in /etc/ssl/certs
	I0520 04:28:15.658668   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:28:15.661194   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem --> /etc/ssl/certs/148952.pem (1708 bytes)
	I0520 04:28:15.667728   16800 start.go:296] duration metric: took 46.174792ms for postStartSetup
	I0520 04:28:15.667742   16800 fix.go:56] duration metric: took 629.807875ms for fixHost
	I0520 04:28:15.667774   16800 main.go:141] libmachine: Using SSH client type: native
	I0520 04:28:15.667880   16800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10479e900] 0x1047a1160 <nil>  [] 0s} localhost 52977 <nil> <nil>}
	I0520 04:28:15.667884   16800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 04:28:15.731466   16800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716204496.205797013
	
	I0520 04:28:15.731476   16800 fix.go:216] guest clock: 1716204496.205797013
	I0520 04:28:15.731480   16800 fix.go:229] Guest: 2024-05-20 04:28:16.205797013 -0700 PDT Remote: 2024-05-20 04:28:15.667744 -0700 PDT m=+0.732917792 (delta=538.053013ms)
	I0520 04:28:15.731492   16800 fix.go:200] guest clock delta is within tolerance: 538.053013ms
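The clock-skew check above works by running `date +%s.%N` on the guest, parsing the seconds.nanoseconds value, and comparing it against the host's wall clock; this run passes because the roughly 538ms delta is within tolerance. A rough sketch of that comparison, using the values captured in the log; the parsing and the tolerance constant here are assumptions for illustration, not minikube's exact threshold.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses the output of `date +%s.%N` from the guest and
    // returns how far the guest clock is ahead of (or behind) the given host time.
    func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(hostNow), nil
    }

    func main() {
    	// Values taken from the log above: the guest reported 1716204496.205797013
    	// while the host clock read 2024-05-20 04:28:15.667744 PDT.
    	host := time.Date(2024, 5, 20, 4, 28, 15, 667744000, time.FixedZone("PDT", -7*3600))
    	delta, err := guestClockDelta("1716204496.205797013", host)
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = 2 * time.Second // illustrative only
    	fmt.Printf("delta=%v, within %v: %v\n", delta, tolerance, delta < tolerance && delta > -tolerance)
    }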
	I0520 04:28:15.731495   16800 start.go:83] releasing machines lock for "running-upgrade-901000", held for 693.5715ms
	I0520 04:28:15.731557   16800 ssh_runner.go:195] Run: cat /version.json
	I0520 04:28:15.731559   16800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:28:15.731566   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:28:15.731577   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	W0520 04:28:15.732154   16800 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53086->127.0.0.1:52977: read: connection reset by peer
	I0520 04:28:15.732172   16800 retry.go:31] will retry after 287.330189ms: ssh: handshake failed: read tcp 127.0.0.1:53086->127.0.0.1:52977: read: connection reset by peer
	W0520 04:28:16.057264   16800 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 04:28:16.057334   16800 ssh_runner.go:195] Run: systemctl --version
	I0520 04:28:16.059171   16800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 04:28:16.060953   16800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:28:16.060979   16800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 04:28:16.063701   16800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 04:28:16.068065   16800 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:28:16.068074   16800 start.go:494] detecting cgroup driver to use...
	I0520 04:28:16.068174   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:28:16.073597   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 04:28:16.076402   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:28:16.079614   16800 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:28:16.079636   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:28:16.082630   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:28:16.087446   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:28:16.090901   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:28:16.093755   16800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:28:16.096588   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:28:16.099610   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:28:16.103008   16800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:28:16.106213   16800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:28:16.108903   16800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:28:16.111468   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:28:16.202040   16800 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 04:28:16.208483   16800 start.go:494] detecting cgroup driver to use...
	I0520 04:28:16.208554   16800 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:28:16.219060   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:28:16.227541   16800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:28:16.235463   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:28:16.240191   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:28:16.244854   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:28:16.250209   16800 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:28:16.251477   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:28:16.254644   16800 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:28:16.259704   16800 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:28:16.342751   16800 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:28:16.435338   16800 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:28:16.435423   16800 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:28:16.440479   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:28:16.514602   16800 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:28:30.049069   16800 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.534612s)
	I0520 04:28:30.049136   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:28:30.053773   16800 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 04:28:30.060764   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:28:30.065350   16800 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:28:30.133294   16800 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:28:30.208234   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:28:30.284372   16800 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:28:30.290399   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:28:30.295030   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:28:30.371667   16800 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:28:30.415567   16800 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:28:30.415642   16800 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:28:30.418725   16800 start.go:562] Will wait 60s for crictl version
	I0520 04:28:30.418765   16800 ssh_runner.go:195] Run: which crictl
	I0520 04:28:30.420120   16800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:28:30.432415   16800 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 04:28:30.432482   16800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:28:30.445632   16800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:28:30.461173   16800 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 04:28:30.461242   16800 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 04:28:30.462591   16800 kubeadm.go:877] updating cluster {Name:running-upgrade-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53009 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-901000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 04:28:30.462634   16800 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:28:30.462676   16800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:28:30.472724   16800 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:28:30.472732   16800 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:28:30.472777   16800 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:28:30.475737   16800 ssh_runner.go:195] Run: which lz4
	I0520 04:28:30.477080   16800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 04:28:30.478292   16800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 04:28:30.478301   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 04:28:31.131021   16800 docker.go:649] duration metric: took 653.979042ms to copy over tarball
	I0520 04:28:31.131093   16800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:28:32.221511   16800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.090415625s)
	I0520 04:28:32.221524   16800 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 04:28:32.237005   16800 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:28:32.239832   16800 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 04:28:32.244858   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:28:32.327634   16800 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:28:34.314324   16800 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.986696958s)
	I0520 04:28:34.314434   16800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:28:34.331160   16800 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:28:34.331171   16800 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:28:34.331176   16800 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 04:28:34.344331   16800 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:28:34.344331   16800 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:28:34.344396   16800 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:28:34.346056   16800 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:28:34.346922   16800 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:28:34.347025   16800 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:28:34.347067   16800 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:28:34.347117   16800 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:28:34.354115   16800 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:28:34.354203   16800 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:28:34.354297   16800 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:28:34.354343   16800 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:28:34.354498   16800 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:28:34.354511   16800 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:28:34.354718   16800 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:28:34.354924   16800 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:28:34.735526   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:28:34.748038   16800 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 04:28:34.748059   16800 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:28:34.748109   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:28:34.750116   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 04:28:34.750379   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 04:28:34.765618   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 04:28:34.767927   16800 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 04:28:34.767953   16800 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 04:28:34.768002   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 04:28:34.770921   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:28:34.773927   16800 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 04:28:34.773945   16800 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:28:34.773982   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 04:28:34.785184   16800 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 04:28:34.785204   16800 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:28:34.785233   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:28:34.785253   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:28:34.785321   16800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 04:28:34.796177   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:28:34.796294   16800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:28:34.798607   16800 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 04:28:34.798622   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 04:28:34.798623   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 04:28:34.798643   16800 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 04:28:34.798649   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 04:28:34.803356   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:28:34.818957   16800 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 04:28:34.818970   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 04:28:34.825916   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:28:34.844984   16800 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 04:28:34.845017   16800 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:28:34.845069   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0520 04:28:34.862234   16800 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:28:34.862365   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:28:34.905646   16800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 04:28:34.905686   16800 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 04:28:34.905703   16800 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:28:34.905755   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:28:34.912881   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 04:28:34.915120   16800 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 04:28:34.915140   16800 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:28:34.915189   16800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:28:34.938500   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 04:28:34.957503   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:28:34.958550   16800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:28:34.971206   16800 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 04:28:34.971237   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 04:28:35.057948   16800 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:28:35.057963   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 04:28:35.114354   16800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 04:28:35.114376   16800 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:28:35.114383   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0520 04:28:35.222189   16800 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:28:35.222305   16800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:28:35.268454   16800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 04:28:35.268471   16800 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 04:28:35.268491   16800 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:28:35.268552   16800 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:28:36.219253   16800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:28:36.219615   16800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:28:36.225673   16800 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 04:28:36.225706   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 04:28:36.274119   16800 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:28:36.274133   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 04:28:36.502722   16800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 04:28:36.502759   16800 cache_images.go:92] duration metric: took 2.1716035s to LoadCachedImages
	W0520 04:28:36.502794   16800 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
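The image-cache activity between 04:28:34 and 04:28:36 repeats one loop per required image: `docker image inspect --format {{.Id}}` to see whether the image already exists at the expected hash, `docker rmi` when a stale copy is present, an existence check for the tarball under /var/lib/minikube/images, an scp of the cached tarball when missing, and finally `sudo cat <tar> | docker load`. A condensed single-host sketch of that loop follows; the hash comparison format, error handling, and paths are simplified assumptions.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // ensureImage is a simplified version of the per-image flow in the log:
    // inspect, remove a mismatching copy, then `docker load` from a cached tarball.
    func ensureImage(image, wantID, cachedTar string) error {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err == nil && string(out) == wantID+"\n" {
    		return nil // already present at the expected hash
    	}
    	if err == nil {
    		// Present but at the wrong hash: remove it before reloading, as the log does.
    		if rmErr := exec.Command("docker", "rmi", image).Run(); rmErr != nil {
    			return fmt.Errorf("rmi %s: %w", image, rmErr)
    		}
    	}
    	if _, statErr := os.Stat(cachedTar); statErr != nil {
    		return fmt.Errorf("cached tarball %s missing: %w", cachedTar, statErr)
    	}
    	f, err := os.Open(cachedTar)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	load := exec.Command("docker", "load")
    	load.Stdin = f
    	if out, err := load.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load %s: %v: %s", filepath.Base(cachedTar), err, out)
    	}
    	return nil
    }

    func main() {
    	// Hash taken from the "needs transfer" line above; the sha256: prefix and
    	// exact Id format are assumptions of this sketch.
    	err := ensureImage("registry.k8s.io/pause:3.7",
    		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }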
	I0520 04:28:36.502799   16800 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 04:28:36.502871   16800 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-901000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-901000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:28:36.502930   16800 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:28:36.516062   16800 cni.go:84] Creating CNI manager for ""
	I0520 04:28:36.516074   16800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:28:36.516089   16800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:28:36.516102   16800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-901000 NodeName:running-upgrade-901000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:28:36.516171   16800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-901000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 04:28:36.516239   16800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 04:28:36.519469   16800 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:28:36.519500   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 04:28:36.522803   16800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 04:28:36.527915   16800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:28:36.533114   16800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0520 04:28:36.538028   16800 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 04:28:36.539237   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:28:36.620282   16800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:28:36.625517   16800 certs.go:68] Setting up /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000 for IP: 10.0.2.15
	I0520 04:28:36.625525   16800 certs.go:194] generating shared ca certs ...
	I0520 04:28:36.625534   16800 certs.go:226] acquiring lock for ca certs: {Name:mk68bd2733d4beefbc93944c03f6a3a33405f849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:28:36.625776   16800 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.key
	I0520 04:28:36.625815   16800 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.key
	I0520 04:28:36.625823   16800 certs.go:256] generating profile certs ...
	I0520 04:28:36.625882   16800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.key
	I0520 04:28:36.625894   16800 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.key.83835269
	I0520 04:28:36.625903   16800 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.crt.83835269 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 04:28:36.769007   16800 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.crt.83835269 ...
	I0520 04:28:36.769016   16800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.crt.83835269: {Name:mk218db31f4c699ca00c28c7d022a26f53c0a571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:28:36.769267   16800 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.key.83835269 ...
	I0520 04:28:36.769272   16800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.key.83835269: {Name:mkb1e11e8da5047bf8679cecdb23bc2b92f8f7b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:28:36.769405   16800 certs.go:381] copying /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.crt.83835269 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.crt
	I0520 04:28:36.769532   16800 certs.go:385] copying /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.key.83835269 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.key
	I0520 04:28:36.769657   16800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/proxy-client.key
	I0520 04:28:36.769784   16800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895.pem (1338 bytes)
	W0520 04:28:36.769805   16800 certs.go:480] ignoring /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895_empty.pem, impossibly tiny 0 bytes
	I0520 04:28:36.769811   16800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 04:28:36.769854   16800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem (1078 bytes)
	I0520 04:28:36.769878   16800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem (1123 bytes)
	I0520 04:28:36.769899   16800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem (1679 bytes)
	I0520 04:28:36.769951   16800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem (1708 bytes)
	I0520 04:28:36.770320   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:28:36.777662   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 04:28:36.784325   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:28:36.791402   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 04:28:36.798890   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 04:28:36.806526   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 04:28:36.813599   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:28:36.820552   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 04:28:36.827394   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:28:36.834550   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895.pem --> /usr/share/ca-certificates/14895.pem (1338 bytes)
	I0520 04:28:36.841856   16800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem --> /usr/share/ca-certificates/148952.pem (1708 bytes)
	I0520 04:28:36.848485   16800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:28:36.853385   16800 ssh_runner.go:195] Run: openssl version
	I0520 04:28:36.855117   16800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:28:36.858637   16800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:28:36.860132   16800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:28:36.860150   16800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:28:36.861915   16800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:28:36.864536   16800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14895.pem && ln -fs /usr/share/ca-certificates/14895.pem /etc/ssl/certs/14895.pem"
	I0520 04:28:36.867607   16800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14895.pem
	I0520 04:28:36.869081   16800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 11:16 /usr/share/ca-certificates/14895.pem
	I0520 04:28:36.869101   16800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14895.pem
	I0520 04:28:36.870845   16800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14895.pem /etc/ssl/certs/51391683.0"
	I0520 04:28:36.874073   16800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148952.pem && ln -fs /usr/share/ca-certificates/148952.pem /etc/ssl/certs/148952.pem"
	I0520 04:28:36.877157   16800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148952.pem
	I0520 04:28:36.878535   16800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 11:16 /usr/share/ca-certificates/148952.pem
	I0520 04:28:36.878552   16800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148952.pem
	I0520 04:28:36.880540   16800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148952.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:28:36.883384   16800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:28:36.884913   16800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 04:28:36.886619   16800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 04:28:36.888509   16800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 04:28:36.890236   16800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 04:28:36.892253   16800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 04:28:36.894115   16800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
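Each `openssl x509 -noout -in <cert> -checkend 86400` call above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how the restart path decides whether a control-plane cert needs regeneration. The same check can be expressed in Go with crypto/x509; the file path below is just one of the certs from the log and the helper name is made up for the sketch.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least the given duration, matching `openssl x509 -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for the next 24h:", ok)
    }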
	I0520 04:28:36.895995   16800 kubeadm.go:391] StartCluster: {Name:running-upgrade-901000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53009 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:ru
nning-upgrade-901000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:28:36.896058   16800 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:28:36.906496   16800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 04:28:36.909960   16800 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 04:28:36.909966   16800 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 04:28:36.909969   16800 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 04:28:36.909991   16800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 04:28:36.912760   16800 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:28:36.912795   16800 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-901000" does not appear in /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:28:36.912809   16800 kubeconfig.go:62] /Users/jenkins/minikube-integration/18932-14402/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-901000" cluster setting kubeconfig missing "running-upgrade-901000" context setting]
	I0520 04:28:36.912985   16800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:28:36.913890   16800 kapi.go:59] client config for running-upgrade-901000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[
]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b28580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:28:36.914698   16800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 04:28:36.917409   16800 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-901000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0520 04:28:36.917414   16800 kubeadm.go:1154] stopping kube-system containers ...
	I0520 04:28:36.917457   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:28:36.928166   16800 docker.go:483] Stopping containers: [1ad2463ab24e 3e56790a0558 59ec5daf7698 f23eed61d692 4fdcae446f05 5fa1a21b1c90 3efe71192e81 34373842c11a 2891142f3cee b9cceac11f37 bdd563daedf2 0bbcba9c9766 9f1db3a4a9d7 d59411790846 d9cd3d369600]
	I0520 04:28:36.928229   16800 ssh_runner.go:195] Run: docker stop 1ad2463ab24e 3e56790a0558 59ec5daf7698 f23eed61d692 4fdcae446f05 5fa1a21b1c90 3efe71192e81 34373842c11a 2891142f3cee b9cceac11f37 bdd563daedf2 0bbcba9c9766 9f1db3a4a9d7 d59411790846 d9cd3d369600
	I0520 04:28:36.942930   16800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 04:28:37.019195   16800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:28:37.023296   16800 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 May 20 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 20 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May 20 11:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May 20 11:28 /etc/kubernetes/scheduler.conf
	
	I0520 04:28:37.023325   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/admin.conf
	I0520 04:28:37.026812   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:28:37.026842   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:28:37.030373   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/kubelet.conf
	I0520 04:28:37.033299   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:28:37.033319   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:28:37.035991   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/controller-manager.conf
	I0520 04:28:37.039110   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:28:37.039129   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:28:37.042205   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/scheduler.conf
	I0520 04:28:37.044822   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:28:37.044844   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:28:37.047480   16800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:28:37.050807   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:28:37.072039   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:28:37.573620   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:28:37.951790   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:28:37.999514   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:28:38.028774   16800 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:28:38.028847   16800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:28:38.531355   16800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:28:39.030933   16800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:28:39.035027   16800 api_server.go:72] duration metric: took 1.006267792s to wait for apiserver process to appear ...
	I0520 04:28:39.035037   16800 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:28:39.035045   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:28:44.037354   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:28:44.037440   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:28:49.038241   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:28:49.038327   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:28:54.039361   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:28:54.039409   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:28:59.040527   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:28:59.040616   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:04.042234   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:04.042296   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:09.044295   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:09.044378   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:14.046964   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:14.047052   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:19.049291   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:19.049398   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:24.051198   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:24.051298   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:29.053864   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:29.053934   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:34.055661   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:34.055739   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:39.056367   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:39.056606   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:29:39.082121   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:29:39.082245   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:29:39.099100   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:29:39.099196   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:29:39.111778   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:29:39.111849   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:29:39.122764   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:29:39.122836   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:29:39.133115   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:29:39.133191   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:29:39.143718   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:29:39.143798   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:29:39.159823   16800 logs.go:276] 0 containers: []
	W0520 04:29:39.159835   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:29:39.159885   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:29:39.170365   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:29:39.170386   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:29:39.170393   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:29:39.185384   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:29:39.185397   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:29:39.202420   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:29:39.202431   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:29:39.213439   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:29:39.213454   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:29:39.218229   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:29:39.218235   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:29:39.289031   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:29:39.289041   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:29:39.327139   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:29:39.327152   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:29:39.340617   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:29:39.340630   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:29:39.355524   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:29:39.355538   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:29:39.372910   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:29:39.372923   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:29:39.384037   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:29:39.384048   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:29:39.409230   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:29:39.409236   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:29:39.445700   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:29:39.445707   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:29:39.456983   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:29:39.456995   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:29:39.472797   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:29:39.472821   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:29:39.487152   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:29:39.487162   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:29:39.501831   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:29:39.501840   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:29:42.016192   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:47.018934   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:47.019345   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:29:47.057540   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:29:47.057665   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:29:47.078467   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:29:47.078562   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:29:47.093082   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:29:47.093145   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:29:47.105481   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:29:47.105546   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:29:47.116331   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:29:47.116395   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:29:47.127072   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:29:47.127152   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:29:47.136923   16800 logs.go:276] 0 containers: []
	W0520 04:29:47.136936   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:29:47.136986   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:29:47.147365   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:29:47.147386   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:29:47.147391   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:29:47.164928   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:29:47.164939   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:29:47.180594   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:29:47.180607   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:29:47.191760   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:29:47.191769   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:29:47.216090   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:29:47.216100   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:29:47.229635   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:29:47.229646   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:29:47.243813   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:29:47.243824   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:29:47.255005   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:29:47.255017   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:29:47.269479   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:29:47.269492   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:29:47.280907   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:29:47.280922   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:29:47.292884   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:29:47.292896   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:29:47.336458   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:29:47.336472   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:29:47.373430   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:29:47.373440   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:29:47.384728   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:29:47.384740   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:29:47.396524   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:29:47.396538   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:29:47.433453   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:29:47.433461   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:29:47.438019   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:29:47.438029   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:29:49.954171   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:29:54.959926   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:29:54.960315   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:29:55.001317   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:29:55.001448   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:29:55.023875   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:29:55.023993   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:29:55.040042   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:29:55.040117   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:29:55.052826   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:29:55.052904   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:29:55.063602   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:29:55.063670   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:29:55.074201   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:29:55.074266   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:29:55.084594   16800 logs.go:276] 0 containers: []
	W0520 04:29:55.084610   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:29:55.084668   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:29:55.097257   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:29:55.097276   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:29:55.097281   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:29:55.123738   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:29:55.123748   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:29:55.160321   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:29:55.160329   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:29:55.164592   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:29:55.164600   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:29:55.178716   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:29:55.178727   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:29:55.190700   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:29:55.190709   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:29:55.206237   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:29:55.206249   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:29:55.220571   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:29:55.220583   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:29:55.232698   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:29:55.232711   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:29:55.268903   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:29:55.268915   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:29:55.305864   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:29:55.305876   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:29:55.321045   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:29:55.321055   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:29:55.332727   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:29:55.332738   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:29:55.355032   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:29:55.355043   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:29:55.367130   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:29:55.367143   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:29:55.384458   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:29:55.384469   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:29:55.396033   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:29:55.396042   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:29:57.909621   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:02.912364   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:02.912590   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:02.930249   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:02.930336   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:02.943773   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:02.943846   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:02.954708   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:02.954773   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:02.964902   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:02.964964   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:02.974828   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:02.974897   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:02.984972   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:02.985030   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:02.995410   16800 logs.go:276] 0 containers: []
	W0520 04:30:02.995420   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:02.995471   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:03.005633   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:03.005651   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:03.005657   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:03.020512   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:03.020524   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:03.034795   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:03.034810   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:03.069385   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:03.069397   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:03.093315   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:03.093329   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:03.107558   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:03.107569   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:03.119305   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:03.119317   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:03.132410   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:03.132423   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:03.147025   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:03.147036   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:03.158122   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:03.158131   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:03.162841   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:03.162849   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:03.183831   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:03.183844   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:03.197282   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:03.197294   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:03.236217   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:03.236229   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:03.247126   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:03.247138   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:03.271532   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:03.271540   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:03.283117   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:03.283129   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:05.821432   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:10.824334   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:10.824826   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:10.861348   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:10.861486   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:10.883240   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:10.883354   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:10.898934   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:10.899011   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:10.911267   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:10.911334   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:10.921738   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:10.921800   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:10.932771   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:10.932872   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:10.943605   16800 logs.go:276] 0 containers: []
	W0520 04:30:10.943619   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:10.943684   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:10.955850   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:10.955873   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:10.955878   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:10.969052   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:10.969060   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:10.994200   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:10.994208   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:11.005785   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:11.005797   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:11.042126   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:11.042133   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:11.056041   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:11.056051   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:11.067931   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:11.067942   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:11.085348   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:11.085363   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:11.096559   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:11.096568   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:11.136585   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:11.136596   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:11.148012   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:11.148025   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:11.162775   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:11.162790   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:11.184221   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:11.184230   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:11.220699   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:11.220709   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:11.232108   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:11.232118   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:11.236347   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:11.236354   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:11.250021   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:11.250034   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:13.767737   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:18.770415   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:18.770849   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:18.811331   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:18.811454   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:18.839231   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:18.839328   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:18.854102   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:18.854166   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:18.875225   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:18.875298   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:18.885769   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:18.885829   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:18.896388   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:18.896459   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:18.906984   16800 logs.go:276] 0 containers: []
	W0520 04:30:18.906993   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:18.907042   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:18.917658   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:18.917676   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:18.917682   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:18.929647   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:18.929659   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:18.946820   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:18.946832   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:18.957865   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:18.957876   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:18.969163   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:18.969177   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:18.985895   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:18.985905   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:19.000079   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:19.000092   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:19.010983   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:19.010994   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:19.022313   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:19.022326   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:19.058173   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:19.058182   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:19.094470   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:19.094481   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:19.115037   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:19.115050   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:19.129340   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:19.129352   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:19.146372   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:19.146381   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:19.161590   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:19.161602   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:19.187106   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:19.187116   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:19.191486   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:19.191495   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:21.729894   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:26.732581   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:26.732973   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:26.765218   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:26.765348   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:26.787347   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:26.787436   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:26.807222   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:26.807296   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:26.818163   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:26.818233   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:26.832766   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:26.832834   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:26.843183   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:26.843257   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:26.853884   16800 logs.go:276] 0 containers: []
	W0520 04:30:26.853896   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:26.853953   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:26.864243   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:26.864262   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:26.864267   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:26.899439   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:26.899450   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:26.912699   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:26.912712   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:26.924158   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:26.924168   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:26.935819   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:26.935833   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:26.940044   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:26.940051   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:26.954275   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:26.954286   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:26.965371   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:26.965385   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:26.979555   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:26.979567   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:26.994624   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:26.994634   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:27.005740   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:27.005750   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:27.020662   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:27.020674   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:27.057712   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:27.057718   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:27.092393   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:27.092404   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:27.107385   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:27.107397   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:27.119048   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:27.119059   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:27.136606   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:27.136617   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:29.664116   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:34.666850   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:34.666907   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:34.678722   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:34.678781   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:34.692880   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:34.692940   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:34.703186   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:34.703231   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:34.713753   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:34.713809   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:34.724334   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:34.724374   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:34.735869   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:34.735925   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:34.747322   16800 logs.go:276] 0 containers: []
	W0520 04:30:34.747330   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:34.747370   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:34.758723   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:34.758737   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:34.758742   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:34.770835   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:34.770845   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:34.785600   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:34.785611   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:34.796919   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:34.796932   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:34.821380   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:34.821389   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:34.858237   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:34.858244   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:34.895644   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:34.895667   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:34.906726   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:34.906738   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:34.918301   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:34.918312   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:34.929384   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:34.929397   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:34.941661   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:34.941676   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:34.946498   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:34.946505   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:34.960542   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:34.960552   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:34.994610   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:34.994620   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:35.008780   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:35.008789   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:35.023510   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:35.023525   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:35.040162   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:35.040172   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:37.560061   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:42.562409   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:42.562522   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:42.574569   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:42.574645   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:42.585865   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:42.585934   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:42.597001   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:42.597067   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:42.608084   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:42.608158   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:42.619422   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:42.619492   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:42.630770   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:42.630843   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:42.641558   16800 logs.go:276] 0 containers: []
	W0520 04:30:42.641573   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:42.641632   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:42.653159   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:42.653176   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:42.653181   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:42.691312   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:42.691332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:42.703303   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:42.703314   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:42.744991   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:42.745012   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:42.749923   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:42.749933   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:42.764705   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:42.764717   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:42.776642   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:42.776659   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:42.791527   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:42.791537   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:42.803138   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:42.803148   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:42.838272   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:42.838281   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:42.850221   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:42.850234   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:42.874405   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:42.874414   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:42.888708   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:42.888718   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:42.906284   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:42.906293   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:42.920986   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:42.920996   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:42.937672   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:42.937682   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:42.948578   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:42.948593   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:45.462396   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:50.464525   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:50.464670   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:50.475863   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:50.475941   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:50.486683   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:50.486758   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:50.497665   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:50.497730   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:50.508636   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:50.508710   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:50.522088   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:50.522157   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:50.536196   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:50.536267   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:50.546689   16800 logs.go:276] 0 containers: []
	W0520 04:30:50.546700   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:50.546758   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:50.557597   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:50.557614   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:50.557619   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:50.575806   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:50.575817   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:30:50.587636   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:50.587647   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:50.624558   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:50.624566   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:50.643178   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:50.643189   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:50.657699   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:50.657711   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:50.669227   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:50.669238   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:50.680895   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:50.680907   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:50.716363   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:50.716374   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:50.733013   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:50.733025   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:50.748166   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:50.748176   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:50.763437   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:50.763447   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:50.768095   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:50.768102   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:50.786374   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:50.786384   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:50.811532   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:50.811540   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:50.823438   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:50.823449   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:50.860624   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:50.860635   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:53.374293   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:30:58.376600   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:30:58.377006   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:30:58.420963   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:30:58.421107   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:30:58.442361   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:30:58.442468   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:30:58.455501   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:30:58.455586   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:30:58.467133   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:30:58.467196   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:30:58.477546   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:30:58.477615   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:30:58.488025   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:30:58.488089   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:30:58.498407   16800 logs.go:276] 0 containers: []
	W0520 04:30:58.498419   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:30:58.498476   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:30:58.508565   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:30:58.508581   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:30:58.508593   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:30:58.545312   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:30:58.545322   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:30:58.579060   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:30:58.579074   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:30:58.593866   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:30:58.593876   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:30:58.629845   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:30:58.629854   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:30:58.654643   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:30:58.654650   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:30:58.676398   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:30:58.676414   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:30:58.690740   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:30:58.690754   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:30:58.705114   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:30:58.705124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:30:58.720588   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:30:58.720601   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:30:58.737790   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:30:58.737800   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:30:58.752764   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:30:58.752777   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:30:58.764002   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:30:58.764015   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:30:58.768439   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:30:58.768448   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:30:58.782717   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:30:58.782729   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:30:58.794075   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:30:58.794086   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:30:58.805072   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:30:58.805085   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:01.318170   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:06.321005   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:06.321481   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:06.365746   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:06.365892   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:06.387502   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:06.387601   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:06.402538   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:06.402614   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:06.415296   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:06.415377   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:06.426515   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:06.426590   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:06.437502   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:06.437569   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:06.457756   16800 logs.go:276] 0 containers: []
	W0520 04:31:06.457768   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:06.457829   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:06.468948   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:06.468967   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:06.468973   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:06.506897   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:06.506914   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:06.546756   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:06.546782   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:06.560986   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:06.561004   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:06.575129   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:06.575145   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:06.593864   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:06.593875   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:06.611062   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:06.611076   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:06.616069   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:06.616078   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:06.657695   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:06.657708   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:06.671934   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:06.671947   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:06.683412   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:06.683425   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:06.694960   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:06.694973   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:06.710867   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:06.710879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:06.728079   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:06.728090   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:06.752984   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:06.752992   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:06.767089   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:06.767102   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:06.782812   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:06.782822   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:09.299676   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:14.301079   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:14.301466   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:14.340808   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:14.340943   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:14.362572   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:14.362670   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:14.377614   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:14.377689   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:14.390101   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:14.390173   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:14.400825   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:14.400888   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:14.411587   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:14.411660   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:14.421763   16800 logs.go:276] 0 containers: []
	W0520 04:31:14.421776   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:14.421835   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:14.432683   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:14.432703   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:14.432708   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:14.444899   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:14.444910   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:14.481886   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:14.481894   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:14.497242   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:14.497251   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:14.510809   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:14.510821   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:14.522926   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:14.522936   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:14.547724   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:14.547737   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:14.552274   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:14.552281   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:14.565915   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:14.565925   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:14.578228   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:14.578238   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:14.589879   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:14.589893   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:14.625613   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:14.625624   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:14.640152   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:14.640166   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:14.655019   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:14.655030   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:14.673357   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:14.673368   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:14.685408   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:14.685420   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:14.721968   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:14.721982   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:17.236269   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:22.238417   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:22.238637   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:22.250117   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:22.250188   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:22.261884   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:22.261959   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:22.273014   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:22.273085   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:22.284416   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:22.284501   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:22.294877   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:22.294951   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:22.306224   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:22.306296   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:22.317808   16800 logs.go:276] 0 containers: []
	W0520 04:31:22.317821   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:22.317878   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:22.328489   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:22.328509   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:22.328516   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:22.346854   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:22.346868   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:22.363919   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:22.363933   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:22.380604   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:22.380618   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:22.406402   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:22.406423   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:22.445640   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:22.445658   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:22.485876   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:22.485899   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:22.502155   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:22.502171   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:22.515444   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:22.515460   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:22.530283   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:22.530298   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:22.556780   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:22.556794   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:22.569528   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:22.569539   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:22.584349   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:22.584363   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:22.598446   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:22.598462   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:22.610843   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:22.610858   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:22.615697   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:22.615710   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:22.652201   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:22.652213   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:25.170976   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:30.173746   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:30.174234   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:30.212872   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:30.213008   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:30.234836   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:30.234947   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:30.250428   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:30.250495   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:30.262536   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:30.262606   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:30.273080   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:30.273146   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:30.283737   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:30.283811   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:30.298813   16800 logs.go:276] 0 containers: []
	W0520 04:31:30.298824   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:30.298875   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:30.309032   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:30.309050   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:30.309055   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:30.320467   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:30.320481   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:30.357731   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:30.357741   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:30.362115   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:30.362121   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:30.398208   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:30.398222   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:30.415613   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:30.415626   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:30.427942   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:30.427953   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:30.449723   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:30.449733   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:30.464132   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:30.464142   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:30.479182   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:30.479191   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:30.494866   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:30.494878   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:30.519439   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:30.519448   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:30.533266   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:30.533277   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:30.571540   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:30.571554   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:30.586079   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:30.586092   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:30.600470   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:30.600483   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:30.611350   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:30.611362   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:33.124935   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:38.127184   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:38.127295   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:38.139481   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:38.139541   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:38.156108   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:38.156185   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:38.167588   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:38.167660   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:38.179186   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:38.179253   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:38.189529   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:38.189592   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:38.200094   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:38.200162   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:38.210243   16800 logs.go:276] 0 containers: []
	W0520 04:31:38.210253   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:38.210310   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:38.220167   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:38.220187   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:38.220192   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:38.235619   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:38.235628   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:38.250525   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:38.250537   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:38.261601   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:38.261613   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:38.273580   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:38.273591   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:38.308652   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:38.308662   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:38.321917   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:38.321927   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:38.333456   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:38.333467   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:38.348099   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:38.348113   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:38.370597   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:38.370607   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:38.382407   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:38.382419   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:38.419654   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:38.419662   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:38.443805   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:38.443812   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:38.448303   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:38.448312   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:38.464910   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:38.464921   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:38.509440   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:38.509453   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:38.523507   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:38.523520   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:41.034957   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:46.035568   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:46.035664   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:46.046521   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:46.046598   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:46.057474   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:46.057547   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:46.067711   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:46.067778   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:46.078606   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:46.078680   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:46.093300   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:46.093369   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:46.104230   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:46.104299   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:46.119874   16800 logs.go:276] 0 containers: []
	W0520 04:31:46.119886   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:46.119950   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:46.141414   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:46.141432   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:46.141438   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:46.156294   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:46.156304   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:46.167653   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:46.167665   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:46.182917   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:46.182928   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:46.196779   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:46.196789   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:46.212320   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:46.212331   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:46.226186   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:46.226204   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:46.246127   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:46.246148   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:46.288721   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:46.288739   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:46.305322   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:46.305337   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:46.345022   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:46.345041   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:46.360251   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:46.360266   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:46.386749   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:46.386770   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:46.392483   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:46.392500   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:46.406564   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:46.406577   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:46.420521   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:46.420534   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:46.433826   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:46.433840   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:48.976677   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:53.979024   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:53.979551   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:54.019340   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:54.019478   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:54.042146   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:54.042242   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:54.057194   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:54.057284   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:54.069519   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:54.069592   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:54.080321   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:54.080399   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:54.095725   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:54.095818   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:54.107189   16800 logs.go:276] 0 containers: []
	W0520 04:31:54.107201   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:54.107266   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:54.117915   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:54.117931   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:54.117936   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:54.129302   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:54.129317   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:54.145694   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:54.145704   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:54.165195   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:54.165204   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:54.179687   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:54.179696   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:54.203480   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:54.203487   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:54.242093   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:54.242107   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:54.278865   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:54.278880   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:54.297592   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:54.297606   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:54.308232   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:54.308241   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:54.345591   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:54.345602   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:54.362050   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:54.362058   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:54.374189   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:54.374199   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:54.378826   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:54.378833   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:54.393215   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:54.393228   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:54.407919   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:54.407929   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:54.419553   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:54.419562   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:56.933063   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:01.935241   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:01.935354   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:01.946540   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:01.946615   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:01.957444   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:01.957516   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:01.967661   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:01.967729   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:01.978192   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:01.978260   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:01.989024   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:01.989093   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:02.006452   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:02.006515   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:02.016464   16800 logs.go:276] 0 containers: []
	W0520 04:32:02.016479   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:02.016538   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:02.027412   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:02.027430   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:02.027435   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:02.041133   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:02.041147   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:02.056545   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:02.056555   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:02.074734   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:02.074745   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:02.091019   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:02.091030   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:02.095898   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:02.095905   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:02.107878   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:02.107890   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:02.123042   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:02.123056   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:02.159714   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:02.159726   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:02.173872   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:02.173883   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:02.209924   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:02.209948   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:02.225882   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:02.225895   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:02.251564   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:02.251584   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:02.263471   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:02.263482   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:02.299288   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:02.299301   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:02.314962   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:02.314977   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:02.326958   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:02.326971   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:04.848973   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:09.851171   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:09.851362   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:09.872784   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:09.872889   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:09.895702   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:09.895772   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:09.910601   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:09.910661   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:09.921166   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:09.921236   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:09.931859   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:09.931935   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:09.942284   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:09.942346   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:09.952771   16800 logs.go:276] 0 containers: []
	W0520 04:32:09.957150   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:09.957263   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:09.967594   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:09.967614   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:09.967620   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:09.978921   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:09.978932   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:09.991214   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:09.991225   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:10.028825   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:10.028834   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:10.069322   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:10.069332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:10.080265   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:10.080275   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:10.093357   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:10.093368   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:10.105499   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:10.105509   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:10.140398   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:10.140408   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:10.155425   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:10.155436   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:10.172674   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:10.172684   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:10.189968   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:10.189980   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:10.205077   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:10.205087   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:10.216802   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:10.216810   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:10.239165   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:10.239173   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:10.243683   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:10.243688   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:10.261266   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:10.261277   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:12.777609   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:17.779957   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:17.780099   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:17.798404   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:17.798485   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:17.809563   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:17.809629   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:17.819738   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:17.819799   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:17.830407   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:17.830474   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:17.840958   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:17.841022   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:17.851095   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:17.851154   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:17.861822   16800 logs.go:276] 0 containers: []
	W0520 04:32:17.861833   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:17.861888   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:17.872690   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:17.872709   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:17.872715   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:17.910552   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:17.910560   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:17.915410   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:17.915417   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:17.949844   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:17.949855   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:17.965308   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:17.965318   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:17.983398   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:17.983410   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:17.998553   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:17.998564   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:18.014292   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:18.014304   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:18.031695   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:18.031710   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:18.048932   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:18.048942   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:18.060752   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:18.060763   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:18.082779   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:18.082785   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:18.096493   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:18.096504   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:18.132844   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:18.132853   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:18.151394   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:18.151405   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:18.165777   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:18.165787   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:18.177140   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:18.177151   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:20.688697   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:25.691336   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:25.691447   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:25.702568   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:25.702637   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:25.712998   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:25.713073   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:25.723202   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:25.723270   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:25.734104   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:25.734173   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:25.744624   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:25.744684   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:25.755030   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:25.755096   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:25.765194   16800 logs.go:276] 0 containers: []
	W0520 04:32:25.765205   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:25.765265   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:25.775524   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:25.775547   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:25.775554   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:25.810869   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:25.810879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:25.824896   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:25.824908   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:25.835872   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:25.835881   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:25.848949   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:25.848960   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:25.863264   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:25.863275   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:25.876165   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:25.876175   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:25.912141   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:25.912148   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:25.934665   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:25.934674   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:25.970423   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:25.970432   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:25.982342   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:25.982353   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:25.998932   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:25.998945   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:26.017297   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:26.017307   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:26.028904   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:26.028914   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:26.033240   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:26.033250   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:26.052322   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:26.052332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:26.067840   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:26.067853   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:28.592941   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:33.595101   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:33.595257   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:33.609945   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:33.610033   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:33.622383   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:33.622452   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:33.632838   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:33.632910   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:33.643901   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:33.643978   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:33.654345   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:33.654415   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:33.665025   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:33.665093   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:33.674780   16800 logs.go:276] 0 containers: []
	W0520 04:32:33.674790   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:33.674849   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:33.685214   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:33.685232   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:33.685237   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:33.699740   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:33.699752   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:33.714270   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:33.714279   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:33.725821   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:33.725833   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:33.739600   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:33.739609   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:33.753464   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:33.753473   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:33.775782   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:33.775795   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:33.789135   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:33.789150   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:33.809394   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:33.809406   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:33.844421   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:33.844435   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:33.880325   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:33.880337   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:33.918378   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:33.918395   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:33.923237   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:33.923244   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:33.935736   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:33.935753   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:33.957680   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:33.957689   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:33.970058   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:33.970070   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:33.987156   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:33.987166   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:36.500486   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:41.502764   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:41.502841   16800 kubeadm.go:591] duration metric: took 4m4.595805291s to restartPrimaryControlPlane
	W0520 04:32:41.502893   16800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 04:32:41.502909   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 04:32:42.522686   16800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.019777375s)
	I0520 04:32:42.522747   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:32:42.527618   16800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:32:42.530313   16800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:32:42.533084   16800 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:32:42.533090   16800 kubeadm.go:156] found existing configuration files:
	
	I0520 04:32:42.533112   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/admin.conf
	I0520 04:32:42.535454   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:32:42.535472   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:32:42.538341   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/kubelet.conf
	I0520 04:32:42.541436   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:32:42.541462   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:32:42.544300   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/controller-manager.conf
	I0520 04:32:42.546761   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:32:42.546780   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:32:42.549898   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/scheduler.conf
	I0520 04:32:42.553236   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:32:42.553262   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:32:42.555958   16800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:32:42.573036   16800 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 04:32:42.573067   16800 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:32:42.620487   16800 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:32:42.620553   16800 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:32:42.620620   16800 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:32:42.672456   16800 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:32:42.676490   16800 out.go:204]   - Generating certificates and keys ...
	I0520 04:32:42.676521   16800 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:32:42.676551   16800 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:32:42.676595   16800 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 04:32:42.676627   16800 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 04:32:42.676754   16800 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 04:32:42.676789   16800 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 04:32:42.676817   16800 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 04:32:42.676846   16800 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 04:32:42.676894   16800 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 04:32:42.676944   16800 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 04:32:42.676965   16800 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 04:32:42.676996   16800 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:32:42.715482   16800 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:32:42.882025   16800 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:32:42.919318   16800 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:32:43.029293   16800 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:32:43.060857   16800 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:32:43.061201   16800 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:32:43.061222   16800 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:32:43.147988   16800 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:32:43.152132   16800 out.go:204]   - Booting up control plane ...
	I0520 04:32:43.152179   16800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:32:43.152215   16800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:32:43.152315   16800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:32:43.152354   16800 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:32:43.152501   16800 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 04:32:47.658424   16800 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.506068 seconds
	I0520 04:32:47.658497   16800 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:32:47.662358   16800 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:32:48.188197   16800 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:32:48.188576   16800 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-901000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:32:48.690870   16800 kubeadm.go:309] [bootstrap-token] Using token: 942xcg.pk2a1901m1kg8gvx
	I0520 04:32:48.694862   16800 out.go:204]   - Configuring RBAC rules ...
	I0520 04:32:48.694920   16800 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:32:48.696939   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:32:48.702890   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:32:48.703800   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:32:48.704813   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:32:48.705686   16800 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:32:48.710453   16800 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:32:48.853003   16800 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:32:49.104357   16800 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:32:49.105069   16800 kubeadm.go:309] 
	I0520 04:32:49.105107   16800 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:32:49.105110   16800 kubeadm.go:309] 
	I0520 04:32:49.105152   16800 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:32:49.105158   16800 kubeadm.go:309] 
	I0520 04:32:49.105201   16800 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:32:49.105267   16800 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:32:49.105310   16800 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:32:49.105314   16800 kubeadm.go:309] 
	I0520 04:32:49.105345   16800 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:32:49.105348   16800 kubeadm.go:309] 
	I0520 04:32:49.105387   16800 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:32:49.105394   16800 kubeadm.go:309] 
	I0520 04:32:49.105439   16800 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:32:49.105483   16800 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:32:49.105530   16800 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:32:49.105534   16800 kubeadm.go:309] 
	I0520 04:32:49.105590   16800 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:32:49.105634   16800 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:32:49.105638   16800 kubeadm.go:309] 
	I0520 04:32:49.105683   16800 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 942xcg.pk2a1901m1kg8gvx \
	I0520 04:32:49.105770   16800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 \
	I0520 04:32:49.105787   16800 kubeadm.go:309] 	--control-plane 
	I0520 04:32:49.105793   16800 kubeadm.go:309] 
	I0520 04:32:49.105840   16800 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:32:49.105844   16800 kubeadm.go:309] 
	I0520 04:32:49.105888   16800 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 942xcg.pk2a1901m1kg8gvx \
	I0520 04:32:49.105958   16800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 
	I0520 04:32:49.106040   16800 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 04:32:49.106049   16800 cni.go:84] Creating CNI manager for ""
	I0520 04:32:49.106058   16800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:32:49.109820   16800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 04:32:49.118773   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 04:32:49.121798   16800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 04:32:49.126768   16800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:32:49.126844   16800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:32:49.126844   16800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-901000 minikube.k8s.io/updated_at=2024_05_20T04_32_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=running-upgrade-901000 minikube.k8s.io/primary=true
	I0520 04:32:49.174645   16800 kubeadm.go:1107] duration metric: took 47.825667ms to wait for elevateKubeSystemPrivileges
	I0520 04:32:49.174669   16800 ops.go:34] apiserver oom_adj: -16
	W0520 04:32:49.174712   16800 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:32:49.174717   16800 kubeadm.go:393] duration metric: took 4m12.281759292s to StartCluster
	I0520 04:32:49.174727   16800 settings.go:142] acquiring lock: {Name:mkfc25767ac77ec7e329af7eb025d278b3830db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:49.174873   16800 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:32:49.175229   16800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:49.175419   16800 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:32:49.178778   16800 out.go:177] * Verifying Kubernetes components...
	I0520 04:32:49.175426   16800 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:32:49.175596   16800 config.go:182] Loaded profile config "running-upgrade-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:32:49.186857   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:32:49.186878   16800 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-901000"
	I0520 04:32:49.186889   16800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-901000"
	I0520 04:32:49.186878   16800 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-901000"
	I0520 04:32:49.186906   16800 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-901000"
	W0520 04:32:49.186910   16800 addons.go:243] addon storage-provisioner should already be in state true
	I0520 04:32:49.186922   16800 host.go:66] Checking if "running-upgrade-901000" exists ...
	I0520 04:32:49.188027   16800 kapi.go:59] client config for running-upgrade-901000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b28580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:32:49.188963   16800 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-901000"
	W0520 04:32:49.188968   16800 addons.go:243] addon default-storageclass should already be in state true
	I0520 04:32:49.188976   16800 host.go:66] Checking if "running-upgrade-901000" exists ...
	I0520 04:32:49.193779   16800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:49.197847   16800 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:32:49.197853   16800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:32:49.197859   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:32:49.198553   16800 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:32:49.198559   16800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:32:49.198563   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:32:49.273936   16800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:32:49.280609   16800 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:32:49.280663   16800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:49.284665   16800 api_server.go:72] duration metric: took 109.2365ms to wait for apiserver process to appear ...
	I0520 04:32:49.284673   16800 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:32:49.284681   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:49.291632   16800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:32:49.294830   16800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:32:54.286847   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:54.286963   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:59.287512   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:59.287547   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:04.288023   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:04.288083   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:09.288889   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:09.288948   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:14.290025   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:14.290081   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:19.291373   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:19.291420   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 04:33:19.630002   16800 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 04:33:19.634407   16800 out.go:177] * Enabled addons: storage-provisioner
	I0520 04:33:19.642304   16800 addons.go:505] duration metric: took 30.467245625s for enable addons: enabled=[storage-provisioner]
	I0520 04:33:24.293120   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:24.293173   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:29.295231   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:29.295283   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:34.297644   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:34.297695   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:39.299927   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:39.299953   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:44.302184   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:44.302244   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:49.303212   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:49.303397   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:49.340926   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:33:49.341016   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:49.355503   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:33:49.355572   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:49.369618   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:33:49.369686   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:49.385105   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:33:49.385162   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:49.395840   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:33:49.395905   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:49.406180   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:33:49.406245   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:49.415953   16800 logs.go:276] 0 containers: []
	W0520 04:33:49.415963   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:49.416017   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:49.426202   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:33:49.426220   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:49.426226   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:49.464691   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:33:49.464703   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:33:49.476623   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:33:49.476634   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:33:49.488301   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:33:49.488313   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:49.500189   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:49.500199   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:49.504817   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:33:49.504827   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:33:49.518899   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:33:49.518909   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:33:49.534977   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:33:49.534987   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:33:49.547218   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:33:49.547230   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:33:49.560785   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:33:49.560794   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:33:49.578288   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:33:49.578298   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:33:49.589416   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:49.589426   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:49.613847   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:49.613855   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:52.148203   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:57.150528   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:57.150696   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:57.170266   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:33:57.170339   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:57.183342   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:33:57.183401   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:57.194640   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:33:57.194714   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:57.205142   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:33:57.205204   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:57.218158   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:33:57.218225   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:57.228581   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:33:57.228636   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:57.239641   16800 logs.go:276] 0 containers: []
	W0520 04:33:57.239650   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:57.239699   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:57.249849   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:33:57.249861   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:57.249866   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:57.284000   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:33:57.284008   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:33:57.296099   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:33:57.296109   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:33:57.314062   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:57.314072   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:57.337854   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:33:57.337863   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:57.348936   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:57.348947   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:57.353910   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:57.353920   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:57.389666   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:33:57.389676   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:33:57.404122   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:33:57.404134   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:33:57.422114   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:33:57.422126   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:33:57.433735   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:33:57.433746   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:33:57.445141   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:33:57.445153   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:33:57.459397   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:33:57.459406   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:33:59.972873   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:04.975097   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:04.975301   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:04.995488   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:04.995576   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:05.010272   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:05.010343   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:05.025456   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:05.025530   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:05.036159   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:05.036230   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:05.046512   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:05.046585   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:05.056532   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:05.056595   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:05.066534   16800 logs.go:276] 0 containers: []
	W0520 04:34:05.066545   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:05.066603   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:05.077063   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:05.077078   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:05.077084   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:05.088833   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:05.088845   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:05.102739   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:05.102751   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:05.137192   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:05.137214   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:05.141743   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:05.141751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:05.153003   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:05.153013   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:05.169321   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:05.169331   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:05.181322   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:05.181334   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:05.195966   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:05.195976   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:05.213983   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:05.213993   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:05.237068   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:05.237075   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:05.273826   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:05.273837   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:05.289343   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:05.289354   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:07.806007   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:12.807866   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:12.808164   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:12.842177   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:12.842310   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:12.861714   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:12.861809   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:12.876389   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:12.876464   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:12.889544   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:12.889625   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:12.905515   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:12.905589   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:12.916906   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:12.916980   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:12.931860   16800 logs.go:276] 0 containers: []
	W0520 04:34:12.931870   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:12.931930   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:12.943222   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:12.943240   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:12.943253   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:12.975724   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:12.975731   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:12.990705   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:12.990714   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:13.004955   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:13.004966   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:13.016717   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:13.016728   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:13.028819   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:13.028831   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:13.043590   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:13.043599   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:13.055073   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:13.055083   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:13.078369   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:13.078384   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:13.090308   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:13.090318   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:13.094689   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:13.094696   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:13.146617   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:13.146634   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:13.159604   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:13.159615   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:15.680002   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:20.682343   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:20.682777   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:20.712876   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:20.713000   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:20.731533   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:20.731624   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:20.746045   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:20.746096   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:20.758085   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:20.758157   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:20.768927   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:20.769002   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:20.779971   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:20.780028   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:20.792143   16800 logs.go:276] 0 containers: []
	W0520 04:34:20.792157   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:20.792206   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:20.802728   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:20.802746   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:20.802751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:20.821179   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:20.821190   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:20.833720   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:20.833731   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:20.845345   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:20.845359   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:20.879705   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:20.879713   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:20.893765   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:20.893775   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:20.907720   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:20.907730   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:20.919620   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:20.919632   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:20.931735   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:20.931745   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:20.951090   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:20.951101   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:20.967596   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:20.967605   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:20.990821   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:20.990830   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:20.995024   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:20.995032   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:23.533513   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:28.535239   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:28.535393   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:28.546907   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:28.546989   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:28.557208   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:28.557271   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:28.567929   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:28.567994   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:28.578179   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:28.578258   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:28.588633   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:28.588706   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:28.599305   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:28.599372   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:28.617351   16800 logs.go:276] 0 containers: []
	W0520 04:34:28.617369   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:28.617433   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:28.634245   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:28.634260   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:28.634266   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:28.645881   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:28.645892   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:28.663813   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:28.663821   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:28.668550   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:28.668559   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:28.702791   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:28.702801   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:28.717093   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:28.717104   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:28.731657   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:28.731670   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:28.743157   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:28.743170   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:28.759106   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:28.759117   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:28.770581   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:28.770592   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:28.782156   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:28.782166   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:28.816871   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:28.816879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:28.829112   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:28.829123   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:31.355561   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:36.357734   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:36.357894   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:36.370158   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:36.370235   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:36.381184   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:36.381256   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:36.391751   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:36.391822   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:36.406506   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:36.406577   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:36.416554   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:36.416628   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:36.426780   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:36.426846   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:36.436954   16800 logs.go:276] 0 containers: []
	W0520 04:34:36.436964   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:36.437018   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:36.447337   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:36.447353   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:36.447357   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:36.458508   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:36.458521   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:36.481121   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:36.481130   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:36.485489   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:36.485496   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:36.519843   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:36.519857   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:36.534226   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:36.534242   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:36.557155   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:36.557167   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:36.576524   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:36.576534   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:36.594020   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:36.594033   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:36.605403   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:36.605414   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:36.639378   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:36.639386   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:36.653985   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:36.653995   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:36.665308   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:36.665319   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:39.179539   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:44.180425   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:44.180653   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:44.209023   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:44.209128   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:44.225311   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:44.225394   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:44.236957   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:44.237022   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:44.250712   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:44.250782   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:44.261386   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:44.261486   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:44.273022   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:44.273083   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:44.283237   16800 logs.go:276] 0 containers: []
	W0520 04:34:44.283245   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:44.283297   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:44.294003   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:44.294017   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:44.294022   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:44.305701   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:44.305711   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:44.310449   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:44.310456   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:44.346702   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:44.346717   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:44.358818   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:44.358828   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:44.370506   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:44.370519   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:44.386040   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:44.386054   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:44.403426   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:44.403439   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:44.428448   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:44.428456   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:44.462167   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:44.462175   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:44.476419   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:44.476432   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:44.490228   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:44.490239   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:44.507501   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:44.507511   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:47.020535   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:52.022841   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:52.023061   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:52.046768   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:52.046858   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:52.061193   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:52.061278   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:52.073771   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:34:52.073846   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:52.084408   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:52.084478   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:52.095360   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:52.095426   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:52.105994   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:52.106070   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:52.116172   16800 logs.go:276] 0 containers: []
	W0520 04:34:52.116183   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:52.116239   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:52.126926   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:52.126946   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:52.126951   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:52.150927   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:52.150935   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:52.183170   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:52.183177   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:52.221474   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:52.221485   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:52.233076   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:52.233086   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:52.246548   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:52.246558   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:52.258081   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:52.258091   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:52.269630   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:52.269645   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:52.287150   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:52.287165   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:52.292717   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:52.292729   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:52.307356   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:52.307372   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:52.326511   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:34:52.326530   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:34:52.339174   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:34:52.339190   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:34:52.353893   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:52.353907   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:52.366378   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:52.366389   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:54.881520   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:59.883932   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:59.884191   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:59.913099   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:59.913232   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:59.935041   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:59.935123   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:59.948094   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:34:59.948176   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:59.959867   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:59.959930   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:59.969979   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:59.970041   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:59.980277   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:59.980344   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:59.990724   16800 logs.go:276] 0 containers: []
	W0520 04:34:59.990735   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:59.990791   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:00.001741   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:00.001757   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:00.001763   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:00.013719   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:00.013727   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:00.025708   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:00.025718   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:00.042668   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:00.042677   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:00.075089   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:00.075099   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:00.079311   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:00.079320   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:00.090582   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:00.090598   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:00.101854   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:00.101864   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:00.118535   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:00.118547   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:00.132536   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:00.132550   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:00.144668   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:00.144680   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:00.160160   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:00.160174   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:00.184513   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:00.184520   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:00.196222   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:00.196234   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:00.231595   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:00.231606   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:02.745390   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:07.745764   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:07.746001   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:07.768929   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:07.769019   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:07.784171   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:07.784248   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:07.801148   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:07.801222   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:07.812238   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:07.812316   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:07.823259   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:07.823330   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:07.834682   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:07.834748   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:07.844913   16800 logs.go:276] 0 containers: []
	W0520 04:35:07.844924   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:07.844981   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:07.856180   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:07.856210   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:07.856215   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:07.890502   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:07.890520   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:07.901296   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:07.901307   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:07.925834   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:07.925841   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:07.929974   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:07.929983   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:07.947515   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:07.947531   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:07.959114   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:07.959124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:07.970646   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:07.970656   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:07.985741   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:07.985751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:07.997460   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:07.997473   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:08.009106   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:08.009115   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:08.045967   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:08.045979   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:08.060935   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:08.060943   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:08.073265   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:08.073280   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:08.090551   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:08.090561   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:10.604365   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:15.606667   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:15.606862   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:15.625780   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:15.625862   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:15.639474   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:15.639547   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:15.651104   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:15.651169   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:15.661506   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:15.661570   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:15.671820   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:15.671874   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:15.682148   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:15.682219   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:15.691864   16800 logs.go:276] 0 containers: []
	W0520 04:35:15.691873   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:15.691925   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:15.703618   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:15.703635   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:15.703640   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:15.715373   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:15.715386   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:15.729473   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:15.729486   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:15.746313   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:15.746322   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:15.757993   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:15.758002   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:15.763340   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:15.763348   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:15.799128   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:15.799140   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:15.823532   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:15.823547   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:15.837368   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:15.837376   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:15.855073   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:15.855084   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:15.867303   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:15.867315   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:15.879299   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:15.879307   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:15.890960   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:15.890969   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:15.915944   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:15.915952   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:15.949510   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:15.949517   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:18.463275   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:23.465557   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:23.465662   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:23.478906   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:23.478992   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:23.490304   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:23.490375   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:23.500533   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:23.500604   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:23.511288   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:23.511353   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:23.521468   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:23.521539   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:23.532211   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:23.532280   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:23.542591   16800 logs.go:276] 0 containers: []
	W0520 04:35:23.542600   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:23.542653   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:23.552644   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:23.552664   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:23.552669   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:23.566774   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:23.566784   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:23.581592   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:23.581603   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:23.585798   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:23.585806   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:23.597565   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:23.597577   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:23.612289   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:23.612302   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:23.624262   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:23.624275   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:23.656613   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:23.656623   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:23.681589   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:23.681596   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:23.693174   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:23.693188   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:23.727589   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:23.727603   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:23.744614   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:23.744625   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:23.756094   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:23.756105   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:23.768076   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:23.768088   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:23.780094   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:23.780107   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:26.299491   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:31.301864   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:31.301993   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:31.314417   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:31.314497   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:31.329343   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:31.329416   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:31.340237   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:31.340314   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:31.351161   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:31.351229   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:31.361995   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:31.362067   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:31.372643   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:31.372714   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:31.383117   16800 logs.go:276] 0 containers: []
	W0520 04:35:31.383131   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:31.383186   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:31.393671   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:31.393688   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:31.393693   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:31.426675   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:31.426691   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:31.431271   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:31.431277   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:31.466935   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:31.466948   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:31.484573   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:31.484583   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:31.508011   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:31.508022   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:31.522151   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:31.522160   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:31.533497   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:31.533507   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:31.551576   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:31.551589   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:31.564042   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:31.564053   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:31.577883   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:31.577893   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:31.592203   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:31.592214   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:31.609169   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:31.609181   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:31.620485   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:31.620496   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:31.631970   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:31.631981   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:34.144563   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:39.147224   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:39.147526   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:39.169670   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:39.169771   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:39.185216   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:39.185300   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:39.201141   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:39.201213   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:39.215306   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:39.215372   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:39.226029   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:39.226098   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:39.236717   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:39.236786   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:39.247158   16800 logs.go:276] 0 containers: []
	W0520 04:35:39.247171   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:39.247227   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:39.257308   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:39.257324   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:39.257329   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:39.268632   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:39.268646   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:39.293530   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:39.293541   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:39.331970   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:39.331981   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:39.348360   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:39.348373   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:39.367732   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:39.367743   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:39.383508   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:39.383519   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:39.397151   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:39.397164   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:39.409194   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:39.409205   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:39.421316   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:39.421328   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:39.435425   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:39.435435   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:39.447449   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:39.447459   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:39.464999   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:39.465009   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:39.476700   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:39.476712   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:39.509803   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:39.509817   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:42.016191   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:47.018553   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:47.018745   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:47.038584   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:47.038680   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:47.052645   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:47.052715   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:47.065195   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:47.065276   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:47.076712   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:47.076789   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:47.087809   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:47.087875   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:47.098247   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:47.098305   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:47.108584   16800 logs.go:276] 0 containers: []
	W0520 04:35:47.108597   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:47.108650   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:47.120030   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:47.120049   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:47.120055   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:47.154147   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:47.154157   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:47.165847   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:47.165856   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:47.178118   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:47.178128   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:47.191657   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:47.191669   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:47.203112   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:47.203124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:47.217741   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:47.217751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:47.235332   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:47.235343   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:47.246518   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:47.246532   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:47.258150   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:47.258164   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:47.262354   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:47.262362   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:47.301835   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:47.301847   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:47.316340   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:47.316350   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:47.328872   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:47.328886   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:47.340937   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:47.340947   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:49.868190   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:54.870762   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:54.870902   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:54.886840   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:54.886910   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:54.897258   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:54.897328   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:54.908408   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:54.908483   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:54.919261   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:54.919323   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:54.929524   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:54.929591   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:54.941314   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:54.941381   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:54.952164   16800 logs.go:276] 0 containers: []
	W0520 04:35:54.953727   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:54.953790   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:54.966662   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:54.966681   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:54.966687   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:54.980728   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:54.980738   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:55.047912   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:55.047925   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:55.061301   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:55.061314   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:55.072630   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:55.072642   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:55.084885   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:55.084897   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:55.102613   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:55.102624   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:55.125896   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:55.125903   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:55.137292   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:55.137306   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:55.141784   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:55.141794   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:55.158259   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:55.158270   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:55.170288   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:55.170299   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:55.181984   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:55.181994   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:55.193467   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:55.193478   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:55.225502   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:55.225509   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:57.738840   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:02.741094   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:02.741244   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:02.756280   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:02.756345   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:02.768353   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:02.768430   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:02.778921   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:02.778983   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:02.789508   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:02.789577   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:02.802130   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:02.802201   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:02.813242   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:02.813307   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:02.823000   16800 logs.go:276] 0 containers: []
	W0520 04:36:02.823012   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:02.823070   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:02.833351   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:02.833371   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:02.833377   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:02.847552   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:02.847565   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:02.858869   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:02.858879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:02.870513   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:02.870523   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:02.881852   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:02.881864   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:02.917928   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:02.917940   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:02.932268   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:02.932279   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:02.956550   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:02.956561   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:02.973838   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:02.973851   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:02.991825   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:02.991837   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:03.016301   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:03.016307   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:03.049685   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:03.049693   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:03.061550   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:03.061563   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:03.074907   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:03.074918   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:03.080194   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:03.080200   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:05.594201   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:10.596074   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:10.596196   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:10.607724   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:10.607808   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:10.619802   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:10.619876   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:10.631085   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:10.631151   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:10.642016   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:10.642103   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:10.653643   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:10.653725   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:10.664639   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:10.664709   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:10.676852   16800 logs.go:276] 0 containers: []
	W0520 04:36:10.676863   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:10.676925   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:10.688186   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:10.688203   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:10.688208   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:10.711362   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:10.711370   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:10.722961   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:10.722972   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:10.755469   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:10.755482   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:10.760330   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:10.760337   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:10.773454   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:10.773469   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:10.789597   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:10.789607   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:10.801049   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:10.801060   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:10.817627   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:10.817637   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:10.829357   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:10.829367   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:10.867008   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:10.867023   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:10.880601   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:10.880612   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:10.892590   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:10.892599   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:10.904367   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:10.904378   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:10.918680   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:10.918690   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:13.438141   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:18.439883   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:18.440089   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:18.454563   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:18.454651   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:18.467111   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:18.467182   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:18.482052   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:18.482130   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:18.492195   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:18.492267   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:18.512452   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:18.512527   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:18.523823   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:18.523900   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:18.534619   16800 logs.go:276] 0 containers: []
	W0520 04:36:18.534632   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:18.534685   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:18.545084   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:18.545103   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:18.545110   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:18.549643   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:18.549652   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:18.563762   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:18.563772   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:18.578736   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:18.578750   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:18.611265   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:18.611276   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:18.622708   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:18.622718   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:18.637390   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:18.637400   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:18.661179   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:18.661186   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:18.696950   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:18.696961   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:18.711273   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:18.711288   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:18.722932   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:18.722942   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:18.737371   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:18.737380   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:18.749238   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:18.749248   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:18.766870   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:18.766880   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:18.778670   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:18.778683   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:21.292436   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:26.294392   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:26.294493   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:26.309049   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:26.309121   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:26.328750   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:26.328831   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:26.345544   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:26.345626   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:26.358731   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:26.358794   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:26.373063   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:26.373133   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:26.384374   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:26.384441   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:26.394251   16800 logs.go:276] 0 containers: []
	W0520 04:36:26.394263   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:26.394320   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:26.404842   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:26.404860   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:26.404865   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:26.409916   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:26.409922   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:26.426298   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:26.426309   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:26.438383   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:26.438392   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:26.450007   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:26.450018   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:26.465318   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:26.465332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:26.477298   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:26.477311   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:26.495223   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:26.495232   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:26.531626   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:26.531639   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:26.547818   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:26.547830   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:26.563207   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:26.563223   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:26.575610   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:26.575622   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:26.609383   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:26.609398   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:26.620630   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:26.620644   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:26.632908   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:26.632918   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:29.157982   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:34.160242   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:34.160546   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:34.196000   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:34.196134   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:34.215880   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:34.215979   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:34.230207   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:34.230286   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:34.242356   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:34.242428   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:34.255154   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:34.255229   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:34.266085   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:34.266158   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:34.276942   16800 logs.go:276] 0 containers: []
	W0520 04:36:34.276953   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:34.277014   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:34.287660   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:34.287678   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:34.287683   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:34.301551   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:34.301562   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:34.324206   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:34.324220   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:34.339782   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:34.339796   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:34.351503   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:34.351514   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:34.375220   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:34.375228   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:34.408763   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:34.408770   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:34.445114   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:34.445124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:34.460048   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:34.460062   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:34.471949   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:34.471959   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:34.489760   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:34.489770   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:34.501974   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:34.501988   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:34.506564   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:34.506571   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:34.520504   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:34.520516   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:34.531646   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:34.531660   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:37.045454   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:42.047709   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:42.047876   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:42.067499   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:42.067592   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:42.081221   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:42.081292   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:42.093182   16800 logs.go:276] 4 containers: [c8137d3e26c2 7b48bfdfe496 98f0fbb43f9f 99862b875156]
	I0520 04:36:42.093253   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:42.107664   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:42.107728   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:42.119836   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:42.119895   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:42.130342   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:42.130399   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:42.141016   16800 logs.go:276] 0 containers: []
	W0520 04:36:42.141028   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:42.141083   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:42.153663   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:42.153684   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:42.153690   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:42.165394   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:42.165405   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:42.170544   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:42.170551   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:42.184682   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:42.184692   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:42.196091   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:42.196103   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:42.210360   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:42.210372   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:42.222719   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:42.222734   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:42.234609   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:42.234619   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:42.268844   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:42.268855   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:42.307784   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:42.307796   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:42.322091   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:42.322103   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:42.345156   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:42.345165   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:42.359576   16800 logs.go:123] Gathering logs for coredns [c8137d3e26c2] ...
	I0520 04:36:42.359589   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8137d3e26c2"
	I0520 04:36:42.377107   16800 logs.go:123] Gathering logs for coredns [7b48bfdfe496] ...
	I0520 04:36:42.377121   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b48bfdfe496"
	I0520 04:36:42.389117   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:42.389132   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:44.909419   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:49.912133   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:49.918368   16800 out.go:177] 
	W0520 04:36:49.922378   16800 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 04:36:49.922406   16800 out.go:239] * 
	W0520 04:36:49.924271   16800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:36:49.929269   16800 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-901000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-20 04:36:50.040051 -0700 PDT m=+1297.551889917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-901000 -n running-upgrade-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-901000 -n running-upgrade-901000: exit status 2 (15.686455209s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-901000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-401000          | force-systemd-flag-401000 | jenkins | v1.33.1 | 20 May 24 04:26 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-790000              | force-systemd-env-790000  | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-790000           | force-systemd-env-790000  | jenkins | v1.33.1 | 20 May 24 04:27 PDT | 20 May 24 04:27 PDT |
	| start   | -p docker-flags-248000                | docker-flags-248000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-401000             | force-systemd-flag-401000 | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-401000          | force-systemd-flag-401000 | jenkins | v1.33.1 | 20 May 24 04:27 PDT | 20 May 24 04:27 PDT |
	| start   | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-248000 ssh               | docker-flags-248000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-248000 ssh               | docker-flags-248000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-248000                | docker-flags-248000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT | 20 May 24 04:27 PDT |
	| start   | -p cert-options-214000                | cert-options-214000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-214000 ssh               | cert-options-214000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-214000 -- sudo        | cert-options-214000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-214000                | cert-options-214000       | jenkins | v1.33.1 | 20 May 24 04:27 PDT | 20 May 24 04:27 PDT |
	| start   | -p running-upgrade-901000             | minikube                  | jenkins | v1.26.0 | 20 May 24 04:27 PDT | 20 May 24 04:28 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-901000             | running-upgrade-901000    | jenkins | v1.33.1 | 20 May 24 04:28 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.1 | 20 May 24 04:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-169000             | cert-expiration-169000    | jenkins | v1.33.1 | 20 May 24 04:30 PDT | 20 May 24 04:30 PDT |
	| start   | -p kubernetes-upgrade-815000          | kubernetes-upgrade-815000 | jenkins | v1.33.1 | 20 May 24 04:30 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-815000          | kubernetes-upgrade-815000 | jenkins | v1.33.1 | 20 May 24 04:30 PDT | 20 May 24 04:30 PDT |
	| start   | -p kubernetes-upgrade-815000          | kubernetes-upgrade-815000 | jenkins | v1.33.1 | 20 May 24 04:30 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-815000          | kubernetes-upgrade-815000 | jenkins | v1.33.1 | 20 May 24 04:30 PDT | 20 May 24 04:30 PDT |
	| start   | -p stopped-upgrade-484000             | minikube                  | jenkins | v1.26.0 | 20 May 24 04:30 PDT | 20 May 24 04:31 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-484000 stop           | minikube                  | jenkins | v1.26.0 | 20 May 24 04:31 PDT | 20 May 24 04:31 PDT |
	| start   | -p stopped-upgrade-484000             | stopped-upgrade-484000    | jenkins | v1.33.1 | 20 May 24 04:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:31:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:31:34.979580   16966 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:31:34.979752   16966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:31:34.979756   16966 out.go:304] Setting ErrFile to fd 2...
	I0520 04:31:34.979758   16966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:31:34.979920   16966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:31:34.981092   16966 out.go:298] Setting JSON to false
	I0520 04:31:35.000555   16966 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9065,"bootTime":1716195629,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:31:35.000623   16966 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:31:35.006007   16966 out.go:177] * [stopped-upgrade-484000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:31:35.012989   16966 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:31:35.016962   16966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:31:35.013061   16966 notify.go:220] Checking for updates...
	I0520 04:31:35.022949   16966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:31:35.026047   16966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:31:35.029008   16966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:31:35.031945   16966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:31:35.035327   16966 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:31:35.036883   16966 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:31:35.040048   16966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:31:35.043999   16966 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:31:35.050970   16966 start.go:297] selected driver: qemu2
	I0520 04:31:35.050978   16966 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:31:35.051060   16966 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:31:35.053706   16966 cni.go:84] Creating CNI manager for ""
	I0520 04:31:35.053724   16966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:31:35.053752   16966 start.go:340] cluster config:
	{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:31:35.053811   16966 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:31:35.060873   16966 out.go:177] * Starting "stopped-upgrade-484000" primary control-plane node in "stopped-upgrade-484000" cluster
	I0520 04:31:35.064911   16966 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:31:35.064927   16966 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 04:31:35.064938   16966 cache.go:56] Caching tarball of preloaded images
	I0520 04:31:35.064989   16966 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:31:35.064995   16966 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 04:31:35.065055   16966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0520 04:31:35.065461   16966 start.go:360] acquireMachinesLock for stopped-upgrade-484000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:31:35.065496   16966 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "stopped-upgrade-484000"
	I0520 04:31:35.065507   16966 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:31:35.065513   16966 fix.go:54] fixHost starting: 
	I0520 04:31:35.065630   16966 fix.go:112] recreateIfNeeded on stopped-upgrade-484000: state=Stopped err=<nil>
	W0520 04:31:35.065638   16966 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:31:35.073936   16966 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-484000" ...
	I0520 04:31:38.127184   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:38.127295   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:38.139481   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:38.139541   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:38.156108   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:38.156185   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:38.167588   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:38.167660   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:38.179186   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:38.179253   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:38.189529   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:38.189592   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:38.200094   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:38.200162   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:38.210243   16800 logs.go:276] 0 containers: []
	W0520 04:31:38.210253   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:38.210310   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:38.220167   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:38.220187   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:38.220192   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:38.235619   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:38.235628   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:38.250525   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:38.250537   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:38.261601   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:38.261613   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:38.273580   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:38.273591   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:38.308652   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:38.308662   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:38.321917   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:38.321927   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:38.333456   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:38.333467   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:38.348099   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:38.348113   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:38.370597   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:38.370607   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:38.382407   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:38.382419   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:38.419654   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:38.419662   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:38.443805   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:38.443812   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:38.448303   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:38.448312   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:38.464910   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:38.464921   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:38.509440   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:38.509453   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:38.523507   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:38.523520   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:35.078095   16966 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53162-:22,hostfwd=tcp::53163-:2376,hostname=stopped-upgrade-484000 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/disk.qcow2
	I0520 04:31:35.125165   16966 main.go:141] libmachine: STDOUT: 
	I0520 04:31:35.125191   16966 main.go:141] libmachine: STDERR: 
	I0520 04:31:35.125196   16966 main.go:141] libmachine: Waiting for VM to start (ssh -p 53162 docker@127.0.0.1)...
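	[Editor's note] The qemu-system-aarch64 invocation above forwards host ports to the guest with user-mode hostfwd rules (tcp::53162 -> guest :22 for SSH, tcp::53163 -> guest :2376 for the Docker API), and the "Waiting for VM to start (ssh -p 53162 docker@127.0.0.1)" line is libmachine polling the first of those. The following is a minimal, hypothetical Go sketch of such a wait loop, not minikube's actual implementation; the address, poll interval, and timeout are assumptions taken only from this log.

```go
// wait_ssh_sketch.go — hypothetical illustration of polling a QEMU hostfwd
// port until the guest's SSH daemon starts accepting TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort retries a TCP dial against addr until it succeeds or the
// overall timeout expires.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the forwarded port is accepting connections
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	// 127.0.0.1:53162 is the host side of the hostfwd rule shown in the log.
	if err := waitForPort("127.0.0.1:53162", 2*time.Minute); err != nil {
		fmt.Println("VM did not come up:", err)
		return
	}
	fmt.Println("SSH port reachable; provisioning can continue")
}
```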
	I0520 04:31:41.034957   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:46.035568   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:46.035664   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:46.046521   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:46.046598   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:46.057474   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:46.057547   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:46.067711   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:46.067778   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:46.078606   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:46.078680   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:46.093300   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:46.093369   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:46.104230   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:46.104299   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:46.119874   16800 logs.go:276] 0 containers: []
	W0520 04:31:46.119886   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:46.119950   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:46.141414   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:46.141432   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:46.141438   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:46.156294   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:46.156304   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:46.167653   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:46.167665   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:46.182917   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:46.182928   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:46.196779   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:46.196789   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:46.212320   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:46.212331   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:46.226186   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:46.226204   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:46.246127   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:46.246148   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:46.288721   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:46.288739   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:46.305322   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:46.305337   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:46.345022   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:46.345041   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:46.360251   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:46.360266   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:46.386749   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:46.386770   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:46.392483   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:46.392500   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:46.406564   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:46.406577   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:46.420521   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:46.420534   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:31:46.433826   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:46.433840   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:48.976677   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:53.979024   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:31:53.979551   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:31:54.019340   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:31:54.019478   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:31:54.042146   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:31:54.042242   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:31:54.057194   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:31:54.057284   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:31:54.069519   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:31:54.069592   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:31:54.080321   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:31:54.080399   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:31:54.095725   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:31:54.095818   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:31:54.107189   16800 logs.go:276] 0 containers: []
	W0520 04:31:54.107201   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:31:54.107266   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:31:54.117915   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:31:54.117931   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:31:54.117936   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:31:54.129302   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:31:54.129317   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:31:54.145694   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:31:54.145704   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:31:54.165195   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:31:54.165204   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:31:54.179687   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:31:54.179696   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:31:54.203480   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:31:54.203487   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:31:54.242093   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:31:54.242107   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:31:54.278865   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:31:54.278880   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:31:54.297592   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:31:54.297606   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:31:54.308232   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:31:54.308241   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:31:54.345591   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:31:54.345602   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:31:54.362050   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:31:54.362058   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:31:54.374189   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:31:54.374199   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:31:54.378826   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:31:54.378833   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:31:54.393215   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:31:54.393228   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:31:54.407919   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:31:54.407929   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:31:54.419553   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:31:54.419562   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
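	[Editor's note] Throughout this log the 16800 process repeatedly probes https://10.0.2.15:8443/healthz and every probe ends in "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", i.e. the apiserver never answers within the client timeout. The sketch below is a hypothetical illustration of such a probe, not minikube's api_server.go; the 5-second timeout and the skipped TLS verification are assumptions made only for the example.

```go
// healthz_probe_sketch.go — hypothetical healthz probe with a short client
// timeout; against an unreachable endpoint it fails with the same
// "Client.Timeout exceeded while awaiting headers" error seen in this log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: short per-request timeout
		Transport: &http.Transport{
			// The apiserver's certificate is not trusted by the probing host,
			// so verification is skipped purely for this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
```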
	I0520 04:31:55.284637   16966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0520 04:31:55.285452   16966 machine.go:94] provisionDockerMachine start ...
	I0520 04:31:55.285695   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.286304   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.286329   16966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:31:55.372642   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:31:55.372675   16966 buildroot.go:166] provisioning hostname "stopped-upgrade-484000"
	I0520 04:31:55.372822   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.373174   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.373185   16966 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-484000 && echo "stopped-upgrade-484000" | sudo tee /etc/hostname
	I0520 04:31:55.446939   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-484000
	
	I0520 04:31:55.447003   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.447153   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.447164   16966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-484000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-484000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-484000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:31:55.514228   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:31:55.514241   16966 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18932-14402/.minikube CaCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18932-14402/.minikube}
	I0520 04:31:55.514259   16966 buildroot.go:174] setting up certificates
	I0520 04:31:55.514265   16966 provision.go:84] configureAuth start
	I0520 04:31:55.514274   16966 provision.go:143] copyHostCerts
	I0520 04:31:55.514359   16966 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem, removing ...
	I0520 04:31:55.514367   16966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem
	I0520 04:31:55.514508   16966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem (1078 bytes)
	I0520 04:31:55.514738   16966 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem, removing ...
	I0520 04:31:55.514742   16966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem
	I0520 04:31:55.514811   16966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem (1123 bytes)
	I0520 04:31:55.514949   16966 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem, removing ...
	I0520 04:31:55.514954   16966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem
	I0520 04:31:55.515019   16966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem (1679 bytes)
	I0520 04:31:55.515143   16966 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-484000 san=[127.0.0.1 localhost minikube stopped-upgrade-484000]
	I0520 04:31:55.593875   16966 provision.go:177] copyRemoteCerts
	I0520 04:31:55.593927   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:31:55.593940   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:31:55.625584   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:31:55.632248   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 04:31:55.638879   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:31:55.646369   16966 provision.go:87] duration metric: took 132.100084ms to configureAuth
	I0520 04:31:55.646379   16966 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:31:55.646499   16966 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:31:55.646533   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.646620   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.646627   16966 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:31:55.707828   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:31:55.707836   16966 buildroot.go:70] root file system type: tmpfs
	I0520 04:31:55.707890   16966 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:31:55.707938   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.708039   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.708074   16966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:31:55.771097   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:31:55.771145   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.771241   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.771249   16966 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:31:56.136506   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 04:31:56.136522   16966 machine.go:97] duration metric: took 851.070375ms to provisionDockerMachine
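	[Editor's note] The docker.service unit echoed above is generated on the host, written to /lib/systemd/system/docker.service.new over SSH, then swapped in with the diff-and-replace command whose "Created symlink" output appears just before this point. As a rough illustration only (this is not minikube's actual template or flag set), a unit like it could be rendered with Go's text/template; the Provider and ServiceCIDR fields below are placeholders matching the values visible in the log.

```go
// unit_template_sketch.go — hypothetical rendering of a docker.service unit
// similar to the one shown above.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
# Clear the inherited ExecStart before setting a new one (systemd treats
# multiple ExecStart= lines as an error for Type=notify services).
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// Placeholder values taken from the log: provider=qemu2, insecure registry 10.96.0.0/12.
	_ = t.Execute(os.Stdout, struct{ Provider, ServiceCIDR string }{"qemu2", "10.96.0.0/12"})
}
```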
	I0520 04:31:56.136531   16966 start.go:293] postStartSetup for "stopped-upgrade-484000" (driver="qemu2")
	I0520 04:31:56.136538   16966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:31:56.136628   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:31:56.136643   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:31:56.168326   16966 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:31:56.169753   16966 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 04:31:56.169762   16966 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18932-14402/.minikube/addons for local assets ...
	I0520 04:31:56.169858   16966 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18932-14402/.minikube/files for local assets ...
	I0520 04:31:56.169995   16966 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem -> 148952.pem in /etc/ssl/certs
	I0520 04:31:56.170126   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:31:56.174656   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem --> /etc/ssl/certs/148952.pem (1708 bytes)
	I0520 04:31:56.183154   16966 start.go:296] duration metric: took 46.615708ms for postStartSetup
	I0520 04:31:56.183175   16966 fix.go:56] duration metric: took 21.11791725s for fixHost
	I0520 04:31:56.183222   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:56.183338   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:56.183342   16966 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 04:31:56.246653   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716204716.018939379
	
	I0520 04:31:56.246662   16966 fix.go:216] guest clock: 1716204716.018939379
	I0520 04:31:56.246667   16966 fix.go:229] Guest: 2024-05-20 04:31:56.018939379 -0700 PDT Remote: 2024-05-20 04:31:56.183177 -0700 PDT m=+21.233222792 (delta=-164.237621ms)
	I0520 04:31:56.246678   16966 fix.go:200] guest clock delta is within tolerance: -164.237621ms
	I0520 04:31:56.246680   16966 start.go:83] releasing machines lock for "stopped-upgrade-484000", held for 21.181434666s
	I0520 04:31:56.246752   16966 ssh_runner.go:195] Run: cat /version.json
	I0520 04:31:56.246758   16966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:31:56.246761   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:31:56.246780   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	W0520 04:31:56.279363   16966 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 04:31:56.279417   16966 ssh_runner.go:195] Run: systemctl --version
	I0520 04:31:56.437131   16966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 04:31:56.439872   16966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:31:56.439914   16966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 04:31:56.444210   16966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 04:31:56.450888   16966 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:31:56.450898   16966 start.go:494] detecting cgroup driver to use...
	I0520 04:31:56.450990   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:31:56.459115   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 04:31:56.462967   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:31:56.466522   16966 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:31:56.466546   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:31:56.470055   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:31:56.473320   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:31:56.476023   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:31:56.478913   16966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:31:56.482333   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:31:56.485521   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:31:56.488275   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:31:56.491094   16966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:31:56.494336   16966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:31:56.497178   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:56.563678   16966 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 04:31:56.570637   16966 start.go:494] detecting cgroup driver to use...
	I0520 04:31:56.570722   16966 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:31:56.576387   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:31:56.588367   16966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:31:56.595126   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:31:56.600043   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:31:56.604536   16966 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:31:56.670719   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:31:56.676211   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:31:56.681639   16966 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:31:56.682822   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:31:56.685697   16966 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:31:56.690400   16966 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:31:56.770535   16966 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:31:56.846927   16966 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:31:56.846996   16966 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:31:56.852560   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:56.934946   16966 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:31:58.109051   16966 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.174103917s)
	I0520 04:31:58.109104   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:31:58.113812   16966 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 04:31:58.120071   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:31:58.124836   16966 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:31:58.193671   16966 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:31:58.265036   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:58.340911   16966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:31:58.346330   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:31:58.351004   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:58.429946   16966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:31:58.469480   16966 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:31:58.469559   16966 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:31:58.471466   16966 start.go:562] Will wait 60s for crictl version
	I0520 04:31:58.471495   16966 ssh_runner.go:195] Run: which crictl
	I0520 04:31:58.472679   16966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:31:58.487739   16966 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 04:31:58.487812   16966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:31:58.504837   16966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:31:56.933063   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:31:58.524725   16966 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 04:31:58.524804   16966 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 04:31:58.526303   16966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:31:58.530355   16966 kubeadm.go:877] updating cluster {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 04:31:58.530407   16966 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:31:58.530452   16966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:31:58.541544   16966 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:31:58.541569   16966 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:31:58.541624   16966 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:31:58.544824   16966 ssh_runner.go:195] Run: which lz4
	I0520 04:31:58.546356   16966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 04:31:58.547775   16966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 04:31:58.547787   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 04:31:59.210684   16966 docker.go:649] duration metric: took 664.364167ms to copy over tarball
	I0520 04:31:59.210752   16966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:32:01.935241   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:01.935354   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:01.946540   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:01.946615   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:01.957444   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:01.957516   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:01.967661   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:01.967729   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:01.978192   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:01.978260   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:01.989024   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:01.989093   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:02.006452   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:02.006515   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:02.016464   16800 logs.go:276] 0 containers: []
	W0520 04:32:02.016479   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:02.016538   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:02.027412   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:02.027430   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:02.027435   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:02.041133   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:02.041147   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:02.056545   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:02.056555   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:02.074734   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:02.074745   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:02.091019   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:02.091030   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:02.095898   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:02.095905   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:02.107878   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:02.107890   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:02.123042   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:02.123056   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:02.159714   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:02.159726   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:02.173872   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:02.173883   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:02.209924   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:02.209948   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:02.225882   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:02.225895   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:02.251564   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:02.251584   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:02.263471   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:02.263482   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:02.299288   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:02.299301   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:02.314962   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:02.314977   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:02.326958   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:02.326971   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:04.848973   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:00.376945   16966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.166190917s)
	I0520 04:32:00.376963   16966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 04:32:00.392753   16966 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:32:00.395579   16966 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 04:32:00.400622   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:32:00.482479   16966 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:32:02.687565   16966 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.205091167s)
	I0520 04:32:02.687660   16966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:32:02.700863   16966 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:32:02.700871   16966 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:32:02.700881   16966 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
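Note: the preload above ships images under the old k8s.gcr.io names, so every registry.k8s.io reference is treated as missing and the cached-image path is taken. A rough sketch of that presence check (image list copied from the log line above; the loop itself is illustrative, not minikube's code):

    # Which of the required images are already present in the runtime?
    required="
    registry.k8s.io/kube-apiserver:v1.24.1
    registry.k8s.io/kube-controller-manager:v1.24.1
    registry.k8s.io/kube-scheduler:v1.24.1
    registry.k8s.io/kube-proxy:v1.24.1
    registry.k8s.io/pause:3.7
    registry.k8s.io/etcd:3.5.3-0
    registry.k8s.io/coredns/coredns:v1.8.6
    gcr.io/k8s-minikube/storage-provisioner:v5
    "
    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    for img in $required; do
      echo "$have" | grep -qxF "$img" || echo "missing: $img"   # preload only carries k8s.gcr.io tags
    done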
	I0520 04:32:02.714959   16966 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:02.716002   16966 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:02.716097   16966 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:32:02.716128   16966 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:02.716158   16966 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:02.716281   16966 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:02.716313   16966 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:02.716361   16966 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:02.726295   16966 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:02.726339   16966 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:02.726363   16966 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:02.726391   16966 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:32:02.726429   16966 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:02.726473   16966 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:02.726591   16966 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:02.726502   16966 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.345035   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:03.357857   16966 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 04:32:03.357883   16966 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:03.357935   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:03.367760   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:03.368588   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 04:32:03.379564   16966 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 04:32:03.379582   16966 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:03.379627   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:03.386418   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 04:32:03.388357   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:03.392755   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 04:32:03.398975   16966 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 04:32:03.399001   16966 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 04:32:03.399056   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 04:32:03.404704   16966 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 04:32:03.404732   16966 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:03.404793   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:03.412792   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:32:03.412915   16966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	W0520 04:32:03.416845   16966 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:32:03.416972   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:03.418384   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 04:32:03.418394   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 04:32:03.418407   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 04:32:03.427331   16966 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 04:32:03.427344   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 04:32:03.434656   16966 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 04:32:03.434677   16966 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:03.434731   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:03.447083   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:03.469490   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 04:32:03.469537   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:32:03.469552   16966 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 04:32:03.469569   16966 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:03.469616   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:03.469640   16966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:32:03.471088   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 04:32:03.471101   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 04:32:03.474865   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.486826   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 04:32:03.497067   16966 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 04:32:03.497094   16966 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.497166   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.517279   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:32:03.517398   16966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:32:03.524655   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 04:32:03.524689   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 04:32:03.526790   16966 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:32:03.526799   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 04:32:03.607956   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 04:32:03.723811   16966 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:32:03.723825   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0520 04:32:03.873177   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0520 04:32:03.901844   16966 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:32:03.901953   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:03.912817   16966 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 04:32:03.912850   16966 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:03.912907   16966 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:03.933891   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:32:03.934013   16966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:32:03.935359   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 04:32:03.935368   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 04:32:03.967631   16966 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:32:03.967642   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 04:32:04.199568   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 04:32:04.199607   16966 cache_images.go:92] duration metric: took 1.498738417s to LoadCachedImages
	W0520 04:32:04.199646   16966 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
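Note: each missing image above goes through the same cycle: inspect by ID, remove the mismatched tag, stat the staging path, copy the cached tarball in, and pipe it to docker load. The final warning fires because the kube-controller-manager cache file is absent on the host. A condensed, local-only sketch of one cycle (assumed ~/.minikube cache layout; plain cp stands in for the scp-over-SSH step):

    # One cache transfer-and-load cycle, pause:3.7 as the example.
    img=registry.k8s.io/pause:3.7
    src=$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7   # assumed local cache path
    dst=/var/lib/minikube/images/pause_3.7
    if ! docker image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
      sudo stat -c "%s %y" "$dst" >/dev/null 2>&1 || sudo cp "$src" "$dst"   # stage tarball if absent
      sudo /bin/bash -c "cat $dst | docker load"                             # load into the runtime
    fi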
	I0520 04:32:04.199655   16966 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 04:32:04.199725   16966 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-484000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:32:04.199788   16966 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:32:04.219810   16966 cni.go:84] Creating CNI manager for ""
	I0520 04:32:04.219823   16966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:32:04.219830   16966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:32:04.219837   16966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-484000 NodeName:stopped-upgrade-484000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:32:04.219897   16966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-484000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 04:32:04.219945   16966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 04:32:04.223175   16966 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:32:04.223201   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 04:32:04.226383   16966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 04:32:04.231360   16966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:32:04.236275   16966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
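Note: the kubeadm.yaml.new written above sets cgroupDriver: cgroupfs, taken from the earlier docker info --format {{.CgroupDriver}} call. A quick agreement check between the runtime and the rendered kubelet config might look like this (illustrative only; path taken from the log):

    # Cross-check the runtime's cgroup driver against the rendered config.
    runtime_drv=$(docker info --format '{{.CgroupDriver}}')
    cfg_drv=$(sudo awk '/^cgroupDriver:/ {print $2}' /var/tmp/minikube/kubeadm.yaml.new)
    if [ "$runtime_drv" = "$cfg_drv" ]; then
      echo "cgroup drivers agree: $runtime_drv"
    else
      echo "mismatch: runtime=$runtime_drv kubelet-config=$cfg_drv"
    fi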
	I0520 04:32:04.241404   16966 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 04:32:04.242517   16966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
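Note: the two commands above are an idempotent "ensure /etc/hosts entry" step: check for the exact line, and if absent rewrite the file without any stale control-plane entry and append the fresh one. A generic equivalent (IP and hostname are just the values from this run):

    # Idempotently pin a hostname in /etc/hosts.
    ip=10.0.2.15
    name=control-plane.minikube.internal
    if ! grep -qE "^$ip[[:blank:]]+$name\$" /etc/hosts; then
      { grep -v "$name" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } | sudo tee /etc/hosts.new >/dev/null
      sudo cp /etc/hosts.new /etc/hosts
    fi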
	I0520 04:32:04.246181   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:32:04.326219   16966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:32:04.332603   16966 certs.go:68] Setting up /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000 for IP: 10.0.2.15
	I0520 04:32:04.332613   16966 certs.go:194] generating shared ca certs ...
	I0520 04:32:04.332622   16966 certs.go:226] acquiring lock for ca certs: {Name:mk68bd2733d4beefbc93944c03f6a3a33405f849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.332811   16966 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.key
	I0520 04:32:04.333584   16966 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.key
	I0520 04:32:04.333591   16966 certs.go:256] generating profile certs ...
	I0520 04:32:04.333814   16966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key
	I0520 04:32:04.333834   16966 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968
	I0520 04:32:04.333847   16966 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 04:32:04.416053   16966 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 ...
	I0520 04:32:04.416069   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968: {Name:mkbabd86edee89dc28de2080d193c5ddccc74e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.416396   16966 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 ...
	I0520 04:32:04.416402   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968: {Name:mkd86c0394a3353f0a09a4031d227860b5b7c472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.418133   16966 certs.go:381] copying /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt
	I0520 04:32:04.418334   16966 certs.go:385] copying /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key
	I0520 04:32:04.418622   16966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/proxy-client.key
	I0520 04:32:04.418794   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895.pem (1338 bytes)
	W0520 04:32:04.418973   16966 certs.go:480] ignoring /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895_empty.pem, impossibly tiny 0 bytes
	I0520 04:32:04.418980   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 04:32:04.419001   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem (1078 bytes)
	I0520 04:32:04.419021   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem (1123 bytes)
	I0520 04:32:04.419040   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem (1679 bytes)
	I0520 04:32:04.419096   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem (1708 bytes)
	I0520 04:32:04.419462   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:32:04.426448   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 04:32:04.433845   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:32:04.440861   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 04:32:04.447375   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 04:32:04.454049   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 04:32:04.462634   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:32:04.469972   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 04:32:04.476673   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem --> /usr/share/ca-certificates/148952.pem (1708 bytes)
	I0520 04:32:04.482976   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:32:04.490017   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895.pem --> /usr/share/ca-certificates/14895.pem (1338 bytes)
	I0520 04:32:04.496555   16966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:32:04.501192   16966 ssh_runner.go:195] Run: openssl version
	I0520 04:32:04.502999   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148952.pem && ln -fs /usr/share/ca-certificates/148952.pem /etc/ssl/certs/148952.pem"
	I0520 04:32:04.506198   16966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148952.pem
	I0520 04:32:04.507693   16966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 11:16 /usr/share/ca-certificates/148952.pem
	I0520 04:32:04.507713   16966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148952.pem
	I0520 04:32:04.509701   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148952.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:32:04.512467   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:32:04.515266   16966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:32:04.516620   16966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:32:04.516641   16966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:32:04.518396   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:32:04.521576   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14895.pem && ln -fs /usr/share/ca-certificates/14895.pem /etc/ssl/certs/14895.pem"
	I0520 04:32:04.524439   16966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14895.pem
	I0520 04:32:04.525715   16966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 11:16 /usr/share/ca-certificates/14895.pem
	I0520 04:32:04.525735   16966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14895.pem
	I0520 04:32:04.527400   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14895.pem /etc/ssl/certs/51391683.0"
	I0520 04:32:04.530599   16966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:32:04.532502   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 04:32:04.534598   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 04:32:04.536504   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 04:32:04.538404   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 04:32:04.540124   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 04:32:04.541716   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
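Note: the openssl calls above do two separate things: link each CA bundle into /etc/ssl/certs under its subject hash so OpenSSL can find it, and verify that each cluster certificate is still valid for at least 24 hours. Standalone equivalents (paths taken from the log):

    # (1) Link a CA cert into /etc/ssl/certs under its subject hash.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    # (2) Fail if a cert expires within the next 24h (86400s), mirroring -checkend above.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expires within 24h"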
	I0520 04:32:04.543476   16966 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:32:04.543547   16966 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:32:04.553184   16966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 04:32:04.556204   16966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 04:32:04.556211   16966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 04:32:04.556214   16966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 04:32:04.556234   16966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 04:32:04.558970   16966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:32:04.559259   16966 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-484000" does not appear in /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:32:04.559353   16966 kubeconfig.go:62] /Users/jenkins/minikube-integration/18932-14402/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-484000" cluster setting kubeconfig missing "stopped-upgrade-484000" context setting]
	I0520 04:32:04.559572   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.560030   16966 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:32:04.560508   16966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 04:32:04.563098   16966 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-484000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
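Note: drift detection here is a plain unified diff between the kubeadm.yaml already on the node and the freshly rendered .new file; any difference (criSocket and cgroupDriver in this run) forces the reconfigure path that follows. As a bare check (illustrative):

    # Reconfigure only when the rendered config differs from what is on the node.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift detected; cluster will be reconfigured"
    fi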
	I0520 04:32:04.563104   16966 kubeadm.go:1154] stopping kube-system containers ...
	I0520 04:32:04.563139   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:32:04.578076   16966 docker.go:483] Stopping containers: [d3ea00f44c5d 003767b25f73 d34dd3433fb6 b050fc43c844 4e13cbe1f144 65fe0618401e 5fa2cd2b9667 c4002ed29331]
	I0520 04:32:04.578137   16966 ssh_runner.go:195] Run: docker stop d3ea00f44c5d 003767b25f73 d34dd3433fb6 b050fc43c844 4e13cbe1f144 65fe0618401e 5fa2cd2b9667 c4002ed29331
	I0520 04:32:04.588956   16966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 04:32:04.594445   16966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:32:04.597503   16966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:32:04.597512   16966 kubeadm.go:156] found existing configuration files:
	
	I0520 04:32:04.597533   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf
	I0520 04:32:04.599888   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:32:04.599911   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:32:04.602622   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf
	I0520 04:32:04.605576   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:32:04.605595   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:32:04.607898   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf
	I0520 04:32:04.610535   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:32:04.610555   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:32:04.613459   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf
	I0520 04:32:04.615691   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:32:04.615711   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:32:04.618494   16966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:32:04.621612   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:04.643045   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:09.851171   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:09.851362   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:09.872784   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:09.872889   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:09.895702   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:09.895772   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:09.910601   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:09.910661   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:09.921166   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:09.921236   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:09.931859   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:09.931935   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:09.942284   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:09.942346   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:05.196806   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.336931   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.359508   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.382457   16966 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:32:05.382542   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:05.884715   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:06.384600   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:06.388960   16966 api_server.go:72] duration metric: took 1.006517125s to wait for apiserver process to appear ...
	I0520 04:32:06.388968   16966 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:32:06.388977   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
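Note: both start attempts (PIDs 16800 and 16966) are sitting in the same wait loop, probing https://10.0.2.15:8443/healthz and repeatedly hitting the client timeout. A shell equivalent of that wait (endpoint from the log; the 4-minute budget is arbitrary):

    # Poll the apiserver healthz endpoint until it answers or a deadline passes.
    url=https://10.0.2.15:8443/healthz
    deadline=$((SECONDS + 240))        # arbitrary budget for this sketch
    until curl -ksf --max-time 2 "$url" >/dev/null; do
      [ $SECONDS -ge $deadline ] && { echo "apiserver never became healthy"; break; }
      sleep 2
    done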
	I0520 04:32:09.952771   16800 logs.go:276] 0 containers: []
	W0520 04:32:09.957150   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:09.957263   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:09.967594   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:09.967614   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:09.967620   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:09.978921   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:09.978932   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:09.991214   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:09.991225   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:10.028825   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:10.028834   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:10.069322   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:10.069332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:10.080265   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:10.080275   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:10.093357   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:10.093368   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:10.105499   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:10.105509   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:10.140398   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:10.140408   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:10.155425   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:10.155436   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:10.172674   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:10.172684   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:10.189968   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:10.189980   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:10.205077   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:10.205087   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:10.216802   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:10.216810   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:10.239165   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:10.239173   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:10.243683   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:10.243688   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:10.261266   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:10.261277   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:12.777609   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:11.391091   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:11.391236   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:17.779957   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:17.780099   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:17.798404   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:17.798485   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:17.809563   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:17.809629   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:17.819738   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:17.819799   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:17.830407   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:17.830474   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:17.840958   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:17.841022   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:17.851095   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:17.851154   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:17.861822   16800 logs.go:276] 0 containers: []
	W0520 04:32:17.861833   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:17.861888   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:17.872690   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:17.872709   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:17.872715   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:17.910552   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:17.910560   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:17.915410   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:17.915417   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:17.949844   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:17.949855   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:17.965308   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:17.965318   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:17.983398   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:17.983410   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:17.998553   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:17.998564   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:18.014292   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:18.014304   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:18.031695   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:18.031710   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:18.048932   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:18.048942   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:18.060752   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:18.060763   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:18.082779   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:18.082785   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:18.096493   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:18.096504   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:18.132844   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:18.132853   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:18.151394   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:18.151405   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:18.165777   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:18.165787   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:18.177140   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:18.177151   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:16.391766   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:16.391854   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:20.688697   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:21.392473   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:21.392521   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:25.691336   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:25.691447   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:25.702568   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:25.702637   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:25.712998   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:25.713073   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:25.723202   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:25.723270   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:25.734104   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:25.734173   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:25.744624   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:25.744684   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:25.755030   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:25.755096   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:25.765194   16800 logs.go:276] 0 containers: []
	W0520 04:32:25.765205   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:25.765265   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:25.775524   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:25.775547   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:25.775554   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:25.810869   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:25.810879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:25.824896   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:25.824908   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:25.835872   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:25.835881   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:25.848949   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:25.848960   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:25.863264   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:25.863275   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:25.876165   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:25.876175   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:25.912141   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:25.912148   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:25.934665   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:25.934674   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:25.970423   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:25.970432   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:25.982342   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:25.982353   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:25.998932   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:25.998945   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:26.017297   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:26.017307   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:26.028904   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:26.028914   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:26.033240   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:26.033250   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:26.052322   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:26.052332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:26.067840   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:26.067853   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:28.592941   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:26.393239   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:26.393287   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:33.595101   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:33.595257   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:32:33.609945   16800 logs.go:276] 2 containers: [9b6e607a97d2 4fdcae446f05]
	I0520 04:32:33.610033   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:32:33.622383   16800 logs.go:276] 2 containers: [596c94f40b2c 5fa1a21b1c90]
	I0520 04:32:33.622452   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:32:33.632838   16800 logs.go:276] 1 containers: [c320ded00ecb]
	I0520 04:32:33.632910   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:32:33.643901   16800 logs.go:276] 2 containers: [67fdfa2d631b b9cceac11f37]
	I0520 04:32:33.643978   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:32:33.654345   16800 logs.go:276] 1 containers: [8a2f30b4d601]
	I0520 04:32:33.654415   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:32:33.665025   16800 logs.go:276] 2 containers: [b95cc7c67920 2891142f3cee]
	I0520 04:32:33.665093   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:32:33.674780   16800 logs.go:276] 0 containers: []
	W0520 04:32:33.674790   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:32:33.674849   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:32:33.685214   16800 logs.go:276] 2 containers: [c3f61e74fe92 9da8357d1993]
	I0520 04:32:33.685232   16800 logs.go:123] Gathering logs for kube-controller-manager [2891142f3cee] ...
	I0520 04:32:33.685237   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2891142f3cee"
	I0520 04:32:33.699740   16800 logs.go:123] Gathering logs for storage-provisioner [c3f61e74fe92] ...
	I0520 04:32:33.699752   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3f61e74fe92"
	I0520 04:32:33.714270   16800 logs.go:123] Gathering logs for storage-provisioner [9da8357d1993] ...
	I0520 04:32:33.714279   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9da8357d1993"
	I0520 04:32:33.725821   16800 logs.go:123] Gathering logs for kube-apiserver [9b6e607a97d2] ...
	I0520 04:32:33.725833   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b6e607a97d2"
	I0520 04:32:33.739600   16800 logs.go:123] Gathering logs for etcd [596c94f40b2c] ...
	I0520 04:32:33.739609   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 596c94f40b2c"
	I0520 04:32:33.753464   16800 logs.go:123] Gathering logs for kube-scheduler [b9cceac11f37] ...
	I0520 04:32:33.753473   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9cceac11f37"
	I0520 04:32:33.775782   16800 logs.go:123] Gathering logs for kube-proxy [8a2f30b4d601] ...
	I0520 04:32:33.775795   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a2f30b4d601"
	I0520 04:32:33.789135   16800 logs.go:123] Gathering logs for kube-controller-manager [b95cc7c67920] ...
	I0520 04:32:33.789150   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b95cc7c67920"
	I0520 04:32:33.809394   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:32:33.809406   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:32:33.844421   16800 logs.go:123] Gathering logs for kube-apiserver [4fdcae446f05] ...
	I0520 04:32:33.844435   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fdcae446f05"
	I0520 04:32:33.880325   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:32:33.880337   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:32:33.918378   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:32:33.918395   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:32:33.923237   16800 logs.go:123] Gathering logs for kube-scheduler [67fdfa2d631b] ...
	I0520 04:32:33.923244   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67fdfa2d631b"
	I0520 04:32:33.935736   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:32:33.935753   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:32:33.957680   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:32:33.957689   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:32:33.970058   16800 logs.go:123] Gathering logs for etcd [5fa1a21b1c90] ...
	I0520 04:32:33.970070   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fa1a21b1c90"
	I0520 04:32:33.987156   16800 logs.go:123] Gathering logs for coredns [c320ded00ecb] ...
	I0520 04:32:33.987166   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c320ded00ecb"
	I0520 04:32:31.394236   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:31.394307   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:36.500486   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:36.395520   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:36.395565   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:41.502764   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:41.502841   16800 kubeadm.go:591] duration metric: took 4m4.595805291s to restartPrimaryControlPlane
	W0520 04:32:41.502893   16800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 04:32:41.502909   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 04:32:42.522686   16800 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.019777375s)
	I0520 04:32:42.522747   16800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:32:42.527618   16800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:32:42.530313   16800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:32:42.533084   16800 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:32:42.533090   16800 kubeadm.go:156] found existing configuration files:
	
	I0520 04:32:42.533112   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/admin.conf
	I0520 04:32:42.535454   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:32:42.535472   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:32:42.538341   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/kubelet.conf
	I0520 04:32:42.541436   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:32:42.541462   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:32:42.544300   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/controller-manager.conf
	I0520 04:32:42.546761   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:32:42.546780   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:32:42.549898   16800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/scheduler.conf
	I0520 04:32:42.553236   16800 kubeadm.go:162] "https://control-plane.minikube.internal:53009" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53009 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:32:42.553262   16800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:32:42.555958   16800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:32:42.573036   16800 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 04:32:42.573067   16800 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:32:42.620487   16800 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:32:42.620553   16800 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:32:42.620620   16800 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:32:42.672456   16800 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:32:42.676490   16800 out.go:204]   - Generating certificates and keys ...
	I0520 04:32:42.676521   16800 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:32:42.676551   16800 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:32:42.676595   16800 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 04:32:42.676627   16800 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 04:32:42.676754   16800 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 04:32:42.676789   16800 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 04:32:42.676817   16800 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 04:32:42.676846   16800 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 04:32:42.676894   16800 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 04:32:42.676944   16800 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 04:32:42.676965   16800 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 04:32:42.676996   16800 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:32:42.715482   16800 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:32:42.882025   16800 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:32:42.919318   16800 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:32:43.029293   16800 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:32:43.060857   16800 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:32:43.061201   16800 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:32:43.061222   16800 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:32:43.147988   16800 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:32:43.152132   16800 out.go:204]   - Booting up control plane ...
	I0520 04:32:43.152179   16800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:32:43.152215   16800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:32:43.152315   16800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:32:43.152354   16800 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:32:43.152501   16800 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 04:32:41.397163   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:41.397217   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:47.658424   16800 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.506068 seconds
	I0520 04:32:47.658497   16800 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:32:47.662358   16800 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:32:48.188197   16800 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:32:48.188576   16800 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-901000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:32:48.690870   16800 kubeadm.go:309] [bootstrap-token] Using token: 942xcg.pk2a1901m1kg8gvx
	I0520 04:32:48.694862   16800 out.go:204]   - Configuring RBAC rules ...
	I0520 04:32:48.694920   16800 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:32:48.696939   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:32:48.702890   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:32:48.703800   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:32:48.704813   16800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:32:48.705686   16800 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:32:48.710453   16800 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:32:48.853003   16800 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:32:49.104357   16800 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:32:49.105069   16800 kubeadm.go:309] 
	I0520 04:32:49.105107   16800 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:32:49.105110   16800 kubeadm.go:309] 
	I0520 04:32:49.105152   16800 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:32:49.105158   16800 kubeadm.go:309] 
	I0520 04:32:49.105201   16800 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:32:49.105267   16800 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:32:49.105310   16800 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:32:49.105314   16800 kubeadm.go:309] 
	I0520 04:32:49.105345   16800 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:32:49.105348   16800 kubeadm.go:309] 
	I0520 04:32:49.105387   16800 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:32:49.105394   16800 kubeadm.go:309] 
	I0520 04:32:49.105439   16800 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:32:49.105483   16800 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:32:49.105530   16800 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:32:49.105534   16800 kubeadm.go:309] 
	I0520 04:32:49.105590   16800 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:32:49.105634   16800 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:32:49.105638   16800 kubeadm.go:309] 
	I0520 04:32:49.105683   16800 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 942xcg.pk2a1901m1kg8gvx \
	I0520 04:32:49.105770   16800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 \
	I0520 04:32:49.105787   16800 kubeadm.go:309] 	--control-plane 
	I0520 04:32:49.105793   16800 kubeadm.go:309] 
	I0520 04:32:49.105840   16800 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:32:49.105844   16800 kubeadm.go:309] 
	I0520 04:32:49.105888   16800 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 942xcg.pk2a1901m1kg8gvx \
	I0520 04:32:49.105958   16800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 
	I0520 04:32:49.106040   16800 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 04:32:49.106049   16800 cni.go:84] Creating CNI manager for ""
	I0520 04:32:49.106058   16800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:32:49.109820   16800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 04:32:49.118773   16800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 04:32:49.121798   16800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 04:32:49.126768   16800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:32:49.126844   16800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:32:49.126844   16800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-901000 minikube.k8s.io/updated_at=2024_05_20T04_32_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=running-upgrade-901000 minikube.k8s.io/primary=true
	I0520 04:32:49.174645   16800 kubeadm.go:1107] duration metric: took 47.825667ms to wait for elevateKubeSystemPrivileges
	I0520 04:32:49.174669   16800 ops.go:34] apiserver oom_adj: -16
	W0520 04:32:49.174712   16800 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:32:49.174717   16800 kubeadm.go:393] duration metric: took 4m12.281759292s to StartCluster
	I0520 04:32:49.174727   16800 settings.go:142] acquiring lock: {Name:mkfc25767ac77ec7e329af7eb025d278b3830db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:49.174873   16800 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:32:49.175229   16800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:49.175419   16800 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:32:49.178778   16800 out.go:177] * Verifying Kubernetes components...
	I0520 04:32:49.175426   16800 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:32:49.175596   16800 config.go:182] Loaded profile config "running-upgrade-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:32:49.186857   16800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:32:49.186878   16800 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-901000"
	I0520 04:32:49.186889   16800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-901000"
	I0520 04:32:49.186878   16800 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-901000"
	I0520 04:32:49.186906   16800 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-901000"
	W0520 04:32:49.186910   16800 addons.go:243] addon storage-provisioner should already be in state true
	I0520 04:32:49.186922   16800 host.go:66] Checking if "running-upgrade-901000" exists ...
	I0520 04:32:49.188027   16800 kapi.go:59] client config for running-upgrade-901000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/running-upgrade-901000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105b28580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:32:49.188963   16800 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-901000"
	W0520 04:32:49.188968   16800 addons.go:243] addon default-storageclass should already be in state true
	I0520 04:32:49.188976   16800 host.go:66] Checking if "running-upgrade-901000" exists ...
	I0520 04:32:49.193779   16800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:49.197847   16800 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:32:49.197853   16800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:32:49.197859   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:32:49.198553   16800 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:32:49.198559   16800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:32:49.198563   16800 sshutil.go:53] new ssh client: &{IP:localhost Port:52977 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/running-upgrade-901000/id_rsa Username:docker}
	I0520 04:32:49.273936   16800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:32:49.280609   16800 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:32:49.280663   16800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:49.284665   16800 api_server.go:72] duration metric: took 109.2365ms to wait for apiserver process to appear ...
	I0520 04:32:49.284673   16800 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:32:49.284681   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:49.291632   16800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:32:49.294830   16800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:32:46.399196   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:46.399216   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:54.286847   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:54.286963   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:51.401392   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:51.401437   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:59.287512   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:59.287547   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:56.403582   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:56.403609   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:04.288023   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:04.288083   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:01.405749   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:01.405780   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:09.288889   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:09.288948   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:06.407896   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:06.408126   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:06.420035   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:06.420112   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:06.430666   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:06.430739   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:06.441240   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:06.441305   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:06.454035   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:06.454099   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:06.464572   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:06.464635   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:06.475504   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:06.475576   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:06.485436   16966 logs.go:276] 0 containers: []
	W0520 04:33:06.485447   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:06.485513   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:06.501030   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:06.501059   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:06.501065   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:06.515600   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:06.515610   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:06.526875   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:06.526885   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:06.538000   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:06.538337   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:06.554252   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:06.554268   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:06.565785   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:06.565800   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:06.569862   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:06.569869   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:06.583545   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:06.583559   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:06.598252   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:06.598266   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:06.610033   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:06.610045   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:06.728647   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:06.728661   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:06.744735   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:06.744748   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:06.783424   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:06.783433   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:06.830620   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:06.830635   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:06.842203   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:06.842214   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:06.859701   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:06.859711   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:06.873978   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:06.873988   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:09.400609   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:14.290025   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:14.290081   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:14.402832   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:14.403023   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:14.418814   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:14.418916   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:14.431418   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:14.431490   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:14.442167   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:14.442259   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:14.452419   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:14.452502   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:14.462777   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:14.462853   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:14.473752   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:14.473832   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:14.486892   16966 logs.go:276] 0 containers: []
	W0520 04:33:14.486902   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:14.486955   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:14.497950   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:14.497968   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:14.497973   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:14.511933   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:14.511944   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:14.536255   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:14.536266   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:14.540233   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:14.540238   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:14.551803   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:14.551814   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:14.562742   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:14.562752   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:14.600412   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:14.600421   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:14.636640   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:14.636650   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:14.678033   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:14.678044   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:14.693586   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:14.693598   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:14.705287   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:14.705298   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:14.722883   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:14.722893   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:14.740951   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:14.740961   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:14.754677   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:14.754687   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:14.768326   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:14.768335   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:14.791164   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:14.791173   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:14.803274   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:14.803286   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:19.291373   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:19.291420   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 04:33:19.630002   16800 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 04:33:19.634407   16800 out.go:177] * Enabled addons: storage-provisioner
	I0520 04:33:19.642304   16800 addons.go:505] duration metric: took 30.467245625s for enable addons: enabled=[storage-provisioner]
	I0520 04:33:17.316634   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:24.293120   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:24.293173   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:22.317513   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:22.317811   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:22.339981   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:22.340082   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:22.356472   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:22.356561   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:22.369225   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:22.369296   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:22.380516   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:22.380599   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:22.390741   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:22.390815   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:22.401377   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:22.401445   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:22.411438   16966 logs.go:276] 0 containers: []
	W0520 04:33:22.411455   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:22.411511   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:22.422525   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:22.422543   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:22.422548   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:22.459243   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:22.459254   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:22.472717   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:22.472727   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:22.509041   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:22.509049   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:22.520064   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:22.520074   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:22.537361   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:22.537372   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:22.553295   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:22.553305   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:22.564771   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:22.564782   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:22.576192   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:22.576202   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:22.602327   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:22.602340   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:22.619186   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:22.619196   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:22.656708   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:22.656720   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:22.675596   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:22.675609   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:22.687145   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:22.687156   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:22.699388   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:22.699401   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:22.703568   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:22.703578   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:22.717930   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:22.717944   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:29.295231   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:29.295283   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:25.235494   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:34.297644   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:34.297695   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:30.237742   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:30.238084   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:30.279685   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:30.279788   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:30.297526   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:30.297602   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:30.310823   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:30.310893   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:30.322270   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:30.322338   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:30.333622   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:30.333685   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:30.344739   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:30.344809   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:30.355012   16966 logs.go:276] 0 containers: []
	W0520 04:33:30.355023   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:30.355076   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:30.365872   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:30.365891   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:30.365896   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:30.402236   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:30.402243   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:30.439977   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:30.439995   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:30.451918   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:30.451928   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:30.463621   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:30.463635   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:30.467767   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:30.467775   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:30.479375   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:30.479387   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:30.493412   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:30.493424   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:30.518375   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:30.518384   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:30.530400   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:30.530411   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:30.567888   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:30.567898   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:30.582280   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:30.582291   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:30.596700   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:30.596710   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:30.611619   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:30.611630   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:30.631794   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:30.631804   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:30.642542   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:30.642553   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:30.657105   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:30.657114   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:33.181976   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:39.299927   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:39.299953   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:38.184247   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:38.184480   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:38.208810   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:38.208924   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:38.224669   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:38.224746   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:38.237504   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:38.237578   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:38.249058   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:38.249130   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:38.259252   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:38.259315   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:38.270448   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:38.270519   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:38.281330   16966 logs.go:276] 0 containers: []
	W0520 04:33:38.281344   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:38.281405   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:38.294429   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:38.294448   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:38.294454   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:38.332999   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:38.333019   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:38.347572   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:38.347581   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:38.385851   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:38.385862   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:38.398019   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:38.398032   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:38.415340   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:38.415353   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:38.429152   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:38.429164   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:38.441226   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:38.441237   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:38.477579   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:38.477593   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:38.491367   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:38.491382   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:38.502397   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:38.502409   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:38.513290   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:38.513299   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:38.524102   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:38.524113   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:38.528306   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:38.528317   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:38.542735   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:38.542744   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:38.566401   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:38.566410   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:38.583923   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:38.583936   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:44.302184   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:44.302244   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:41.096389   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:49.303212   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:49.303397   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:49.340926   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:33:49.341016   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:49.355503   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:33:49.355572   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:49.369618   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:33:49.369686   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:49.385105   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:33:49.385162   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:49.395840   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:33:49.395905   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:49.406180   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:33:49.406245   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:49.415953   16800 logs.go:276] 0 containers: []
	W0520 04:33:49.415963   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:49.416017   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:49.426202   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:33:49.426220   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:49.426226   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:49.464691   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:33:49.464703   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:33:49.476623   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:33:49.476634   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:33:49.488301   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:33:49.488313   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:49.500189   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:49.500199   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:49.504817   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:33:49.504827   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:33:49.518899   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:33:49.518909   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:33:49.534977   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:33:49.534987   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:33:49.547218   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:33:49.547230   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:33:49.560785   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:33:49.560794   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:33:49.578288   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:33:49.578298   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:33:49.589416   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:49.589426   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:49.613847   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:49.613855   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:46.098847   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:46.099069   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:46.116553   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:46.116642   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:46.130102   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:46.130177   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:46.141947   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:46.142016   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:46.154035   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:46.154105   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:46.164590   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:46.164651   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:46.175039   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:46.175104   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:46.185246   16966 logs.go:276] 0 containers: []
	W0520 04:33:46.185257   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:46.185307   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:46.195712   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:46.195746   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:46.195752   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:46.207155   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:46.207165   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:46.222411   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:46.222422   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:46.233652   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:46.233662   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:46.270184   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:46.270197   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:46.284217   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:46.284226   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:46.305503   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:46.305513   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:46.317541   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:46.317552   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:46.329669   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:46.329680   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:46.368356   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:46.368373   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:46.383965   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:46.383974   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:46.401714   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:46.401728   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:46.413559   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:46.413571   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:46.438539   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:46.438547   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:46.449801   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:46.449811   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:46.454065   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:46.454072   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:46.487815   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:46.487826   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:49.004509   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:52.148203   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:54.006724   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:54.006891   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:54.026830   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:54.026925   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:54.042845   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:54.042914   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:54.055712   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:54.055793   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:54.069852   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:54.069918   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:54.080339   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:54.080413   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:54.091189   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:54.091249   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:54.100948   16966 logs.go:276] 0 containers: []
	W0520 04:33:54.100959   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:54.101009   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:54.111509   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:54.111530   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:54.111536   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:54.124229   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:54.124238   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:54.160562   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:54.160572   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:54.195935   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:54.195946   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:54.210063   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:54.210074   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:54.223773   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:54.223783   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:54.234697   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:54.234708   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:54.246863   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:54.246874   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:54.258393   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:54.258404   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:54.262840   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:54.262850   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:54.302910   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:54.302921   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:54.317614   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:54.317625   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:54.333512   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:54.333525   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:54.358366   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:54.358373   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:54.370115   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:54.370127   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:54.387518   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:54.387531   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:54.401204   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:54.401215   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:57.150528   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:57.150696   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:57.170266   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:33:57.170339   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:57.183342   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:33:57.183401   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:57.194640   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:33:57.194714   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:57.205142   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:33:57.205204   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:57.218158   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:33:57.218225   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:57.228581   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:33:57.228636   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:57.239641   16800 logs.go:276] 0 containers: []
	W0520 04:33:57.239650   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:57.239699   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:57.249849   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:33:57.249861   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:57.249866   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:57.284000   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:33:57.284008   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:33:57.296099   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:33:57.296109   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:33:57.314062   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:57.314072   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:57.337854   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:33:57.337863   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:57.348936   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:57.348947   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:57.353910   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:57.353920   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:57.389666   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:33:57.389676   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:33:57.404122   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:33:57.404134   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:33:57.422114   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:33:57.422126   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:33:57.433735   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:33:57.433746   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:33:57.445141   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:33:57.445153   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:33:57.459397   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:33:57.459406   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:33:56.916473   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:59.972873   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:01.918559   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:01.918944   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:01.954376   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:01.954515   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:01.976262   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:01.976363   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:01.990806   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:01.990889   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:02.003532   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:02.003602   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:02.016345   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:02.016420   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:02.033077   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:02.033148   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:02.043070   16966 logs.go:276] 0 containers: []
	W0520 04:34:02.043081   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:02.043131   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:02.054664   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:02.054683   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:02.054688   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:02.092160   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:02.092167   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:02.106109   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:02.106120   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:02.120257   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:02.120269   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:02.132766   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:02.132777   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:02.168768   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:02.168781   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:02.180746   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:02.180756   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:02.194433   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:02.194443   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:02.199178   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:02.199192   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:02.211344   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:02.211355   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:02.222949   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:02.222958   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:02.247885   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:02.247894   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:02.262160   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:02.262173   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:02.300145   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:02.300156   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:02.316269   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:02.316280   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:02.328363   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:02.328375   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:02.348667   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:02.348677   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:04.862378   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:04.975097   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:04.975301   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:04.995488   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:04.995576   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:05.010272   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:05.010343   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:05.025456   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:05.025530   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:05.036159   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:05.036230   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:05.046512   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:05.046585   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:05.056532   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:05.056595   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:05.066534   16800 logs.go:276] 0 containers: []
	W0520 04:34:05.066545   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:05.066603   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:05.077063   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:05.077078   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:05.077084   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:05.088833   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:05.088845   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:05.102739   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:05.102751   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:05.137192   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:05.137214   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:05.141743   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:05.141751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:05.153003   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:05.153013   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:05.169321   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:05.169331   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:05.181322   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:05.181334   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:05.195966   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:05.195976   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:05.213983   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:05.213993   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:05.237068   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:05.237075   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:05.273826   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:05.273837   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:05.289343   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:05.289354   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:07.806007   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:09.864723   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:09.864897   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:09.877810   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:09.877884   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:09.888334   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:09.888398   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:09.899203   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:09.899271   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:09.915379   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:09.915452   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:09.926039   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:09.926112   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:09.936959   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:09.937024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:09.950110   16966 logs.go:276] 0 containers: []
	W0520 04:34:09.950121   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:09.950189   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:09.960661   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:09.960678   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:09.960683   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:09.975677   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:09.975688   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:12.807866   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:12.808164   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:12.842177   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:12.842310   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:12.861714   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:12.861809   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:12.876389   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:12.876464   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:12.889544   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:12.889625   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:12.905515   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:12.905589   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:12.916906   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:12.916980   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:12.931860   16800 logs.go:276] 0 containers: []
	W0520 04:34:12.931870   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:12.931930   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:12.943222   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:12.943240   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:12.943253   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:12.975724   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:12.975731   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:12.990705   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:12.990714   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:13.004955   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:13.004966   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:13.016717   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:13.016728   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:13.028819   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:13.028831   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:13.043590   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:13.043599   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:13.055073   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:13.055083   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:13.078369   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:13.078384   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:13.090308   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:13.090318   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:13.094689   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:13.094696   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:13.146617   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:13.146634   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:13.159604   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:13.159615   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:09.990011   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:09.990020   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:10.007280   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:10.007293   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:10.042291   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:10.042301   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:10.085119   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:10.085130   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:10.096421   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:10.096434   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:10.108655   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:10.108665   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:10.133147   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:10.133157   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:10.137212   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:10.137217   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:10.151409   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:10.151418   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:10.169114   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:10.169125   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:10.180466   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:10.180478   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:10.192077   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:10.192088   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:10.229179   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:10.229191   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:10.243173   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:10.243188   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:10.256750   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:10.256762   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:12.774413   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:15.680002   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:17.777151   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:17.777527   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:17.815716   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:17.815854   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:17.836034   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:17.836149   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:17.850826   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:17.850902   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:17.862997   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:17.863064   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:17.873572   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:17.873642   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:17.884196   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:17.884262   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:17.899164   16966 logs.go:276] 0 containers: []
	W0520 04:34:17.899175   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:17.899229   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:17.910068   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:17.910085   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:17.910090   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:17.922623   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:17.922634   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:17.939917   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:17.939927   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:17.976538   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:17.976545   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:17.980895   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:17.980904   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:17.994975   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:17.994987   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:18.014299   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:18.014310   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:18.034815   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:18.034825   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:18.049966   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:18.049976   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:18.062663   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:18.062672   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:18.082183   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:18.082194   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:18.094397   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:18.094408   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:18.136531   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:18.136541   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:18.173711   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:18.173721   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:18.197534   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:18.197541   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:18.211822   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:18.211831   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:18.226695   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:18.226705   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:20.682343   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:20.682777   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:20.712876   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:20.713000   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:20.731533   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:20.731624   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:20.746045   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:20.746096   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:20.758085   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:20.758157   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:20.768927   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:20.769002   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:20.779971   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:20.780028   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:20.792143   16800 logs.go:276] 0 containers: []
	W0520 04:34:20.792157   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:20.792206   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:20.802728   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:20.802746   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:20.802751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:20.821179   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:20.821190   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:20.833720   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:20.833731   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:20.845345   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:20.845359   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:20.879705   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:20.879713   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:20.893765   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:20.893775   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:20.907720   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:20.907730   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:20.919620   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:20.919632   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:20.931735   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:20.931745   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:20.951090   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:20.951101   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:20.967596   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:20.967605   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:20.990821   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:20.990830   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:20.995024   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:20.995032   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:23.533513   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:20.745572   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:28.535239   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:28.535393   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:28.546907   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:28.546989   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:28.557208   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:28.557271   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:28.567929   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:28.567994   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:28.578179   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:28.578258   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:28.588633   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:28.588706   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:28.599305   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:28.599372   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:28.617351   16800 logs.go:276] 0 containers: []
	W0520 04:34:28.617369   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:28.617433   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:28.634245   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:28.634260   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:28.634266   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:28.645881   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:28.645892   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:28.663813   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:28.663821   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:28.668550   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:28.668559   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:28.702791   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:28.702801   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:28.717093   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:28.717104   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:28.731657   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:28.731670   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:28.743157   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:28.743170   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:28.759106   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:28.759117   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:28.770581   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:28.770592   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:28.782156   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:28.782166   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:28.816871   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:28.816879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:28.829112   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:28.829123   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:25.747759   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:25.747994   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:25.770460   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:25.770544   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:25.785185   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:25.785251   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:25.803106   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:25.803178   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:25.813803   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:25.813865   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:25.824167   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:25.824223   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:25.834348   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:25.834406   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:25.844233   16966 logs.go:276] 0 containers: []
	W0520 04:34:25.844246   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:25.844301   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:25.854912   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:25.854932   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:25.854936   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:25.891938   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:25.891951   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:25.903854   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:25.903863   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:25.916199   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:25.916210   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:25.927814   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:25.927826   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:25.952770   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:25.952778   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:25.990755   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:25.990767   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:26.005174   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:26.005184   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:26.016971   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:26.016982   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:26.035210   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:26.035219   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:26.048744   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:26.048757   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:26.053329   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:26.053335   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:26.067781   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:26.067791   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:26.082847   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:26.082856   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:26.096947   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:26.096962   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:26.108718   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:26.108729   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:26.120577   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:26.120587   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:28.658559   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:31.355561   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:33.660687   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:33.660830   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:33.683445   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:33.683513   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:33.694464   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:33.694534   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:33.705006   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:33.705078   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:33.715911   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:33.715980   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:33.730371   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:33.730434   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:33.740958   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:33.741023   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:33.751104   16966 logs.go:276] 0 containers: []
	W0520 04:34:33.751114   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:33.751169   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:33.761450   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:33.761466   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:33.761471   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:33.775996   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:33.776005   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:33.789809   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:33.789821   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:33.801799   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:33.801810   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:33.840028   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:33.840036   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:33.852181   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:33.852191   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:33.869215   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:33.869225   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:33.880723   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:33.880733   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:33.894800   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:33.894810   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:33.906233   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:33.906245   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:33.921139   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:33.921149   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:33.946987   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:33.946994   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:33.950889   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:33.950894   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:33.989025   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:33.989036   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:34.027449   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:34.027460   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:34.041101   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:34.041111   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:34.052554   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:34.052566   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:36.357734   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:36.357894   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:36.370158   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:36.370235   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:36.381184   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:36.381256   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:36.391751   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:36.391822   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:36.406506   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:36.406577   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:36.416554   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:36.416628   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:36.426780   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:36.426846   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:36.436954   16800 logs.go:276] 0 containers: []
	W0520 04:34:36.436964   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:36.437018   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:36.447337   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:36.447353   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:36.447357   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:36.458508   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:36.458521   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:36.481121   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:36.481130   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:36.485489   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:36.485496   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:36.519843   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:36.519857   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:36.534226   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:36.534242   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:36.557155   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:36.557167   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:36.576524   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:36.576534   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:36.594020   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:36.594033   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:36.605403   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:36.605414   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:36.639378   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:36.639386   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:36.653985   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:36.653995   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:36.665308   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:36.665319   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:39.179539   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:36.569423   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:44.180425   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:44.180653   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:44.209023   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:44.209128   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:44.225311   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:44.225394   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:44.236957   16800 logs.go:276] 2 containers: [3e28d2642e42 3964253b5a3a]
	I0520 04:34:44.237022   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:44.250712   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:44.250782   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:44.261386   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:44.261486   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:44.273022   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:44.273083   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:44.283237   16800 logs.go:276] 0 containers: []
	W0520 04:34:44.283245   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:44.283297   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:44.294003   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:44.294017   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:44.294022   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:44.305701   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:44.305711   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:44.310449   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:44.310456   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:44.346702   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:44.346717   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:44.358818   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:44.358828   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:44.370506   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:44.370519   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:44.386040   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:44.386054   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:44.403426   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:44.403439   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:44.428448   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:44.428456   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:44.462167   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:44.462175   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:44.476419   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:44.476432   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:44.490228   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:44.490239   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:44.507501   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:44.507511   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:41.570607   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:41.570822   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:41.588649   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:41.588738   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:41.603028   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:41.603104   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:41.613908   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:41.613971   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:41.624176   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:41.624239   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:41.638361   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:41.638431   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:41.649391   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:41.649451   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:41.663745   16966 logs.go:276] 0 containers: []
	W0520 04:34:41.663757   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:41.663817   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:41.674471   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:41.674490   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:41.674495   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:41.699141   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:41.699152   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:41.736302   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:41.736309   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:41.740805   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:41.740811   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:41.754705   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:41.754715   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:41.772196   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:41.772206   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:41.785985   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:41.786000   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:41.797533   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:41.797547   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:41.837908   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:41.837918   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:41.849488   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:41.849501   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:41.868044   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:41.868054   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:41.884331   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:41.884341   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:41.899684   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:41.899700   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:41.914657   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:41.914670   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:41.926738   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:41.926749   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:41.961259   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:41.961268   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:41.975490   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:41.975501   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:44.490307   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:47.020535   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:49.492453   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:49.492667   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:49.510625   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:49.510707   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:49.527390   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:49.527460   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:49.537798   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:49.537869   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:49.548082   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:49.548156   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:49.566275   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:49.566343   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:49.576639   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:49.576708   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:49.590685   16966 logs.go:276] 0 containers: []
	W0520 04:34:49.590697   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:49.590748   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:49.602352   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:49.602376   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:49.602382   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:49.607207   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:49.607219   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:49.645335   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:49.645347   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:49.659669   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:49.659684   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:49.696627   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:49.696638   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:49.714831   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:49.714844   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:49.726985   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:49.726998   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:49.765174   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:49.765182   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:49.776383   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:49.776394   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:49.787925   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:49.787936   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:49.805268   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:49.805281   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:49.819115   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:49.819129   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:49.833752   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:49.833761   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:49.848023   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:49.848032   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:49.859113   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:49.859123   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:49.874805   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:49.874817   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:49.886145   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:49.886163   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:52.022841   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:52.023061   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:52.046768   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:52.046858   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:52.061193   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:52.061278   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:52.073771   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:34:52.073846   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:52.084408   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:52.084478   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:52.095360   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:52.095426   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:52.105994   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:52.106070   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:52.116172   16800 logs.go:276] 0 containers: []
	W0520 04:34:52.116183   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:52.116239   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:52.126926   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:34:52.126946   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:52.126951   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:52.150927   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:52.150935   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:52.183170   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:52.183177   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:52.221474   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:34:52.221485   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:34:52.233076   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:34:52.233086   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:34:52.246548   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:34:52.246558   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:34:52.258081   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:34:52.258091   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:34:52.269630   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:34:52.269645   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:34:52.287150   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:52.287165   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:52.292717   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:34:52.292729   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:34:52.307356   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:34:52.307372   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:34:52.326511   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:34:52.326530   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:34:52.339174   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:34:52.339190   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:34:52.353893   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:34:52.353907   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:34:52.366378   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:34:52.366389   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:54.881520   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:52.410852   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:59.883932   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:59.884191   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:59.913099   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:34:59.913232   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:59.935041   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:34:59.935123   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:59.948094   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:34:59.948176   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:57.412984   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:57.413077   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:57.424550   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:57.424632   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:57.435495   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:57.435568   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:57.449980   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:57.450049   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:57.460855   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:57.460920   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:57.471899   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:57.471975   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:57.482404   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:57.482471   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:57.495445   16966 logs.go:276] 0 containers: []
	W0520 04:34:57.495457   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:57.495515   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:57.505503   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:57.505519   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:57.505524   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:57.541969   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:57.541977   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:57.552477   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:57.552487   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:57.564041   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:57.564055   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:57.576089   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:57.576104   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:57.588013   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:57.588024   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:57.623299   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:57.623310   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:57.637820   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:57.637830   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:57.653987   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:57.653998   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:57.666109   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:57.666120   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:57.680497   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:57.680510   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:57.698761   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:57.698772   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:57.703116   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:57.703123   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:57.745678   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:57.745689   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:57.763426   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:57.763437   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:57.778231   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:57.778241   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:57.795605   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:57.795615   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:59.959867   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:34:59.959930   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:59.969979   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:34:59.970041   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:59.980277   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:34:59.980344   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:59.990724   16800 logs.go:276] 0 containers: []
	W0520 04:34:59.990735   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:59.990791   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:00.001741   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:00.001757   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:00.001763   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:00.013719   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:00.013727   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:00.025708   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:00.025718   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:00.042668   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:00.042677   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:00.075089   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:00.075099   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:00.079311   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:00.079320   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:00.090582   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:00.090598   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:00.101854   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:00.101864   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:00.118535   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:00.118547   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:00.132536   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:00.132550   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:00.144668   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:00.144680   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:00.160160   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:00.160174   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:00.184513   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:00.184520   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:00.196222   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:00.196234   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:00.231595   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:00.231606   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:02.745390   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:00.320807   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:07.745764   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:07.746001   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:07.768929   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:07.769019   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:07.784171   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:07.784248   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:07.801148   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:07.801222   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:07.812238   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:07.812316   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:07.823259   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:07.823330   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:07.834682   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:07.834748   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:07.844913   16800 logs.go:276] 0 containers: []
	W0520 04:35:07.844924   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:07.844981   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:07.856180   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:07.856210   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:07.856215   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:07.890502   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:07.890520   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:07.901296   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:07.901307   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:07.925834   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:07.925841   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:07.929974   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:07.929983   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:07.947515   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:07.947531   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:07.959114   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:07.959124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:07.970646   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:07.970656   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:07.985741   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:07.985751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:07.997460   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:07.997473   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:08.009106   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:08.009115   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:08.045967   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:08.045979   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:08.060935   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:08.060943   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:08.073265   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:08.073280   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:08.090551   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:08.090561   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:05.322434   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:05.322579   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:05.333392   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:05.333472   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:05.344279   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:05.344345   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:05.356962   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:05.357035   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:05.370411   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:05.370480   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:05.384764   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:05.384837   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:05.395656   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:05.395720   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:05.407724   16966 logs.go:276] 0 containers: []
	W0520 04:35:05.407737   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:05.407798   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:05.421929   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:05.421947   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:05.421953   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:05.426286   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:05.426292   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:05.463682   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:05.463693   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:05.478203   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:05.478213   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:05.494664   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:05.494673   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:05.509751   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:05.509761   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:05.524458   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:05.524468   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:05.539577   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:05.539586   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:05.550946   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:05.550955   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:05.573502   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:05.573509   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:05.610807   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:05.610815   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:05.645478   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:05.645488   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:05.657106   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:05.657116   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:05.670729   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:05.670739   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:05.683356   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:05.683366   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:05.698949   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:05.698960   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:05.710669   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:05.710680   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:08.231507   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:10.604365   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:13.233794   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:13.234013   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:13.257303   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:13.257396   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:13.273110   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:13.273192   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:13.294439   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:13.294509   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:13.305041   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:13.305113   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:13.315667   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:13.315741   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:13.327104   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:13.327177   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:13.337384   16966 logs.go:276] 0 containers: []
	W0520 04:35:13.337395   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:13.337451   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:13.348652   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:13.348671   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:13.348676   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:13.363944   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:13.363954   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:13.377580   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:13.377589   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:13.390455   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:13.390466   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:13.413095   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:13.413106   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:13.424547   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:13.424559   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:13.437161   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:13.437173   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:13.448674   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:13.448684   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:13.484678   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:13.484688   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:13.499290   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:13.499300   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:13.511409   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:13.511420   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:13.527557   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:13.527567   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:13.545587   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:13.545596   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:13.557152   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:13.557162   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:13.594142   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:13.594150   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:13.598155   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:13.598164   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:13.634740   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:13.634749   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:15.606667   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:15.606862   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:15.625780   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:15.625862   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:15.639474   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:15.639547   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:15.651104   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:15.651169   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:15.661506   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:15.661570   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:15.671820   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:15.671874   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:15.682148   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:15.682219   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:15.691864   16800 logs.go:276] 0 containers: []
	W0520 04:35:15.691873   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:15.691925   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:15.703618   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:15.703635   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:15.703640   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:15.715373   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:15.715386   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:15.729473   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:15.729486   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:15.746313   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:15.746322   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:15.757993   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:15.758002   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:15.763340   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:15.763348   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:15.799128   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:15.799140   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:15.823532   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:15.823547   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:15.837368   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:15.837376   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:15.855073   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:15.855084   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:15.867303   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:15.867315   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:15.879299   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:15.879307   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:15.890960   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:15.890969   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:15.915944   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:15.915952   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:15.949510   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:15.949517   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:18.463275   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:16.150095   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:23.465557   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:23.465662   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:23.478906   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:23.478992   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:23.490304   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:23.490375   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:23.500533   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:23.500604   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:23.511288   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:23.511353   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:23.521468   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:23.521539   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:23.532211   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:23.532280   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:23.542591   16800 logs.go:276] 0 containers: []
	W0520 04:35:23.542600   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:23.542653   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:23.552644   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:23.552664   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:23.552669   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:23.566774   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:23.566784   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:23.581592   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:23.581603   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:23.585798   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:23.585806   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:23.597565   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:23.597577   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:23.612289   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:23.612302   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:23.624262   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:23.624275   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:23.656613   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:23.656623   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:23.681589   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:23.681596   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:23.693174   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:23.693188   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:23.727589   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:23.727603   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:23.744614   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:23.744625   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:23.756094   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:23.756105   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:23.768076   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:23.768088   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:23.780094   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:23.780107   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:21.152354   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:21.152584   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:21.167481   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:21.167565   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:21.179330   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:21.179391   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:21.190962   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:21.191030   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:21.201696   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:21.201764   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:21.212384   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:21.212450   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:21.222456   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:21.222519   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:21.232045   16966 logs.go:276] 0 containers: []
	W0520 04:35:21.232058   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:21.232115   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:21.242641   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:21.242670   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:21.242675   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:21.256876   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:21.256887   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:21.268463   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:21.268477   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:21.280242   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:21.280255   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:21.284561   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:21.284570   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:21.301049   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:21.301064   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:21.318728   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:21.318738   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:21.341706   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:21.341714   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:21.361013   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:21.361022   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:21.372710   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:21.372719   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:21.407727   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:21.407737   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:21.422034   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:21.422044   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:21.433448   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:21.433463   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:21.445268   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:21.445278   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:21.481873   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:21.481890   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:21.520427   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:21.520438   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:21.534829   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:21.534838   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:24.053074   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:26.299491   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:29.055331   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:29.055496   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:29.071352   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:29.071442   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:29.084369   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:29.084445   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:29.095371   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:29.095441   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:29.106224   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:29.106304   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:29.117187   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:29.117256   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:29.127474   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:29.127551   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:29.140246   16966 logs.go:276] 0 containers: []
	W0520 04:35:29.140259   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:29.140320   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:29.151048   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:29.151069   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:29.151075   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:29.162911   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:29.162924   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:29.167649   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:29.167658   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:29.182147   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:29.182161   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:29.200635   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:29.200648   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:29.218189   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:29.218203   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:29.230499   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:29.230513   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:29.241896   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:29.241906   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:29.265507   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:29.265514   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:29.278887   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:29.278897   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:29.292021   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:29.292031   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:29.331109   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:29.331126   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:29.368221   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:29.368236   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:29.405752   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:29.405763   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:29.417911   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:29.417923   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:29.436967   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:29.436980   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:29.451663   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:29.451672   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:31.301864   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:31.301993   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:31.314417   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:31.314497   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:31.329343   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:31.329416   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:31.340237   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:31.340314   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:31.351161   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:31.351229   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:31.361995   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:31.362067   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:31.372643   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:31.372714   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:31.383117   16800 logs.go:276] 0 containers: []
	W0520 04:35:31.383131   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:31.383186   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:31.393671   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:31.393688   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:31.393693   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:31.426675   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:31.426691   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:31.431271   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:31.431277   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:31.466935   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:31.466948   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:31.484573   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:31.484583   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:31.508011   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:31.508022   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:31.522151   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:31.522160   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:31.533497   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:31.533507   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:31.551576   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:31.551589   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:31.564042   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:31.564053   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:31.577883   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:31.577893   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:31.592203   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:31.592214   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:31.609169   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:31.609181   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:31.620485   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:31.620496   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:31.631970   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:31.631981   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:34.144563   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:31.965716   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:39.147224   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:39.147526   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:39.169670   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:39.169771   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:39.185216   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:39.185300   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:39.201141   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:39.201213   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:39.215306   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:39.215372   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:39.226029   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:39.226098   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:39.236717   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:39.236786   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:39.247158   16800 logs.go:276] 0 containers: []
	W0520 04:35:39.247171   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:39.247227   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:39.257308   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:39.257324   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:39.257329   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:39.268632   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:39.268646   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:39.293530   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:39.293541   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:39.331970   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:39.331981   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:39.348360   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:39.348373   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:39.367732   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:39.367743   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:39.383508   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:39.383519   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:39.397151   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:39.397164   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:39.409194   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:39.409205   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:39.421316   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:39.421328   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:39.435425   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:39.435435   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:39.447449   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:39.447459   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:39.464999   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:39.465009   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:39.476700   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:39.476712   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:39.509803   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:39.509817   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:36.967680   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:36.967850   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:36.980767   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:36.980847   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:36.992103   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:36.992216   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:37.002378   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:37.002447   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:37.012620   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:37.012686   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:37.022767   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:37.022834   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:37.033375   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:37.033442   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:37.043817   16966 logs.go:276] 0 containers: []
	W0520 04:35:37.043829   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:37.043888   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:37.053836   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:37.053855   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:37.053860   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:37.076183   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:37.076190   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:37.089797   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:37.089807   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:37.104705   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:37.104715   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:37.116642   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:37.116654   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:37.137043   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:37.137055   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:37.152501   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:37.152514   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:37.163425   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:37.163439   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:37.197838   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:37.197851   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:37.202591   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:37.202600   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:37.214698   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:37.214713   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:37.232478   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:37.232487   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:37.244364   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:37.244375   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:37.280578   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:37.280586   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:37.298912   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:37.298926   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:37.310237   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:37.310247   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:37.324781   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:37.324791   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:39.865471   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:42.016191   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:44.867819   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:44.868153   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:44.905883   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:44.906024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:44.926490   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:44.926586   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:44.943510   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:44.943592   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:44.955950   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:44.956024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:44.966877   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:44.966945   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:47.018553   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:47.018745   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:47.038584   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:47.038680   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:47.052645   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:47.052715   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:47.065195   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:47.065276   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:47.076712   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:47.076789   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:47.087809   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:47.087875   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:47.098247   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:47.098305   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:47.108584   16800 logs.go:276] 0 containers: []
	W0520 04:35:47.108597   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:47.108650   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:47.120030   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:47.120049   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:47.120055   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:47.154147   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:47.154157   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:47.165847   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:47.165856   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:47.178118   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:47.178128   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:47.191657   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:47.191669   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:47.203112   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:47.203124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:47.217741   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:47.217751   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:47.235332   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:47.235343   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:47.246518   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:47.246532   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:47.258150   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:47.258164   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:47.262354   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:47.262362   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:47.301835   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:47.301847   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:47.316340   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:47.316350   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:47.328872   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:47.328886   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:47.340937   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:47.340947   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:49.868190   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:44.978783   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:44.982239   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:44.993033   16966 logs.go:276] 0 containers: []
	W0520 04:35:44.993044   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:44.993095   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:45.003809   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:45.003827   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:45.003832   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:45.042545   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:45.042554   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:45.055139   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:45.055151   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:45.072558   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:45.072568   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:45.108415   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:45.108427   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:45.123572   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:45.123582   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:45.162506   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:45.162517   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:45.177433   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:45.177442   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:45.192315   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:45.192324   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:45.206181   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:45.206191   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:45.218197   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:45.218208   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:45.229763   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:45.229773   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:45.234285   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:45.234291   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:45.246050   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:45.246063   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:45.264490   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:45.264502   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:45.287599   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:45.287611   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:45.299277   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:45.299287   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:47.812870   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:54.870762   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:54.870902   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:54.886840   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:35:54.886910   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:54.897258   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:35:54.897328   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:54.908408   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:35:54.908483   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:54.919261   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:35:54.919323   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:54.929524   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:35:54.929591   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:54.941314   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:35:54.941381   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:52.815158   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:52.815312   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:52.825850   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:52.825921   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:52.836237   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:52.836305   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:52.846413   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:52.846471   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:52.857148   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:52.857217   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:52.875609   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:52.875674   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:52.885892   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:52.885955   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:52.896283   16966 logs.go:276] 0 containers: []
	W0520 04:35:52.896296   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:52.896346   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:52.910045   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:52.910064   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:52.910069   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:52.921924   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:52.921934   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:52.933653   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:52.933664   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:52.946032   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:52.946043   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:52.950732   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:52.950741   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:52.968611   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:52.968621   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:52.979911   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:52.979921   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:52.991109   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:52.991119   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:53.024168   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:53.024181   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:53.038033   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:53.038042   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:53.053506   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:53.053518   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:53.091334   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:53.091344   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:53.132623   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:53.132642   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:53.148253   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:53.148268   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:53.162218   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:53.162227   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:53.184537   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:53.184545   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:53.197932   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:53.197942   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:54.952164   16800 logs.go:276] 0 containers: []
	W0520 04:35:54.953727   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:54.953790   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:54.966662   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:35:54.966681   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:35:54.966687   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:35:54.980728   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:54.980738   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:55.047912   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:35:55.047925   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:35:55.061301   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:35:55.061314   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:35:55.072630   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:35:55.072642   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:35:55.084885   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:35:55.084897   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:35:55.102613   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:55.102624   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:55.125896   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:35:55.125903   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:55.137292   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:55.137306   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:55.141784   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:35:55.141794   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:35:55.158259   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:35:55.158270   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:35:55.170288   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:35:55.170299   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:35:55.181984   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:35:55.181994   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:35:55.193467   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:55.193478   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:55.225502   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:35:55.225509   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:35:57.738840   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:55.714122   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:02.741094   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:02.741244   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:02.756280   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:02.756345   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:02.768353   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:02.768430   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:02.778921   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:02.778983   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:02.789508   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:02.789577   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:02.802130   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:02.802201   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:02.813242   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:02.813307   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:02.823000   16800 logs.go:276] 0 containers: []
	W0520 04:36:02.823012   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:02.823070   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:02.833351   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:02.833371   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:02.833377   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:02.847552   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:02.847565   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:02.858869   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:02.858879   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:02.870513   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:02.870523   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:02.881852   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:02.881864   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:02.917928   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:02.917940   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:02.932268   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:02.932279   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:02.956550   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:02.956561   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:02.973838   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:02.973851   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:02.991825   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:02.991837   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:03.016301   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:03.016307   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:03.049685   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:03.049693   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:03.061550   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:03.061563   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:03.074907   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:03.074918   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:03.080194   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:03.080200   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:00.714437   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:00.714594   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:00.727584   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:36:00.727665   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:00.742945   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:36:00.743018   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:00.753727   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:36:00.753793   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:00.764119   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:36:00.764185   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:00.778194   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:36:00.778258   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:00.793472   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:36:00.793540   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:00.803977   16966 logs.go:276] 0 containers: []
	W0520 04:36:00.803989   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:00.804045   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:00.814759   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:36:00.814777   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:36:00.814782   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:36:00.826510   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:36:00.826521   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:36:00.841888   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:36:00.841901   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:36:00.861953   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:36:00.861966   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:36:00.873683   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:00.873693   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:00.896215   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:00.896221   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:00.900583   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:00.900591   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:00.936969   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:36:00.936983   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:36:00.952814   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:36:00.952824   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:36:00.964946   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:36:00.964956   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:36:00.985240   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:36:00.985250   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:36:01.006666   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:36:01.006675   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:36:01.046409   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:36:01.046419   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:36:01.063297   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:01.063309   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:01.100786   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:36:01.100795   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:36:01.113311   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:36:01.113321   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:36:01.130878   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:36:01.130889   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:03.645027   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:08.647213   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:08.647317   16966 kubeadm.go:591] duration metric: took 4m4.094137667s to restartPrimaryControlPlane
	W0520 04:36:08.647415   16966 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 04:36:08.647453   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 04:36:09.732011   16966 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.084688417s)
	I0520 04:36:09.732078   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:36:09.737256   16966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:36:09.740196   16966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:36:09.743153   16966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:36:09.743160   16966 kubeadm.go:156] found existing configuration files:
	
	I0520 04:36:09.743183   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf
	I0520 04:36:09.745683   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:36:09.745705   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:36:09.748440   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf
	I0520 04:36:09.751588   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:36:09.751611   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:36:09.754263   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf
	I0520 04:36:09.756664   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:36:09.756687   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:36:09.759815   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf
	I0520 04:36:09.762667   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:36:09.762687   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:36:09.765404   16966 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:36:09.782442   16966 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 04:36:09.782476   16966 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:36:09.833059   16966 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:36:09.833123   16966 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:36:09.833187   16966 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:36:09.889956   16966 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:36:05.594201   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:09.899172   16966 out.go:204]   - Generating certificates and keys ...
	I0520 04:36:09.899206   16966 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:36:09.899238   16966 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:36:09.899277   16966 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 04:36:09.899310   16966 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 04:36:09.899351   16966 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 04:36:09.899385   16966 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 04:36:09.899419   16966 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 04:36:09.899454   16966 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 04:36:09.899487   16966 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 04:36:09.899527   16966 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 04:36:09.899544   16966 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 04:36:09.899577   16966 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:36:09.959701   16966 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:36:10.090773   16966 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:36:10.155744   16966 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:36:10.288105   16966 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:36:10.319031   16966 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:36:10.319575   16966 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:36:10.319703   16966 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:36:10.396449   16966 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:36:10.596074   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:10.596196   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:10.607724   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:10.607808   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:10.619802   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:10.619876   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:10.631085   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:10.631151   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:10.642016   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:10.642103   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:10.653643   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:10.653725   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:10.664639   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:10.664709   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:10.676852   16800 logs.go:276] 0 containers: []
	W0520 04:36:10.676863   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:10.676925   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:10.688186   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:10.688203   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:10.688208   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:10.711362   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:10.711370   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:10.722961   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:10.722972   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:10.755469   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:10.755482   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:10.760330   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:10.760337   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:10.773454   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:10.773469   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:10.789597   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:10.789607   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:10.801049   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:10.801060   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:10.817627   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:10.817637   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:10.829357   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:10.829367   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:10.867008   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:10.867023   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:10.880601   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:10.880612   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:10.892590   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:10.892599   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:10.904367   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:10.904378   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:10.918680   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:10.918690   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:13.438141   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:10.400188   16966 out.go:204]   - Booting up control plane ...
	I0520 04:36:10.400231   16966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:36:10.400266   16966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:36:10.400302   16966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:36:10.401285   16966 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:36:10.401368   16966 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 04:36:14.905322   16966 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504088 seconds
	I0520 04:36:14.905425   16966 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:36:14.911673   16966 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:36:15.419966   16966 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:36:15.420101   16966 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-484000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:36:15.930638   16966 kubeadm.go:309] [bootstrap-token] Using token: ew4xpo.mbfk0gq3vr62cx5o
	I0520 04:36:15.937237   16966 out.go:204]   - Configuring RBAC rules ...
	I0520 04:36:15.937344   16966 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:36:15.937433   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:36:15.940412   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:36:15.944044   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:36:15.945673   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:36:15.947226   16966 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:36:15.955603   16966 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:36:16.152673   16966 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:36:16.335836   16966 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:36:16.336217   16966 kubeadm.go:309] 
	I0520 04:36:16.336247   16966 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:36:16.336252   16966 kubeadm.go:309] 
	I0520 04:36:16.336312   16966 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:36:16.336317   16966 kubeadm.go:309] 
	I0520 04:36:16.336328   16966 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:36:16.336354   16966 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:36:16.336471   16966 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:36:16.336475   16966 kubeadm.go:309] 
	I0520 04:36:16.336501   16966 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:36:16.336507   16966 kubeadm.go:309] 
	I0520 04:36:16.336538   16966 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:36:16.336543   16966 kubeadm.go:309] 
	I0520 04:36:16.336603   16966 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:36:16.336646   16966 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:36:16.336687   16966 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:36:16.336692   16966 kubeadm.go:309] 
	I0520 04:36:16.336750   16966 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:36:16.336789   16966 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:36:16.336800   16966 kubeadm.go:309] 
	I0520 04:36:16.336841   16966 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ew4xpo.mbfk0gq3vr62cx5o \
	I0520 04:36:16.336892   16966 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 \
	I0520 04:36:16.336908   16966 kubeadm.go:309] 	--control-plane 
	I0520 04:36:16.336911   16966 kubeadm.go:309] 
	I0520 04:36:16.336970   16966 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:36:16.336979   16966 kubeadm.go:309] 
	I0520 04:36:16.337017   16966 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ew4xpo.mbfk0gq3vr62cx5o \
	I0520 04:36:16.337073   16966 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 
	I0520 04:36:16.337137   16966 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 04:36:16.337189   16966 cni.go:84] Creating CNI manager for ""
	I0520 04:36:16.337198   16966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:36:16.340654   16966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 04:36:16.343683   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 04:36:16.346923   16966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 04:36:16.352026   16966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:36:16.352091   16966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:36:16.352105   16966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-484000 minikube.k8s.io/updated_at=2024_05_20T04_36_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=stopped-upgrade-484000 minikube.k8s.io/primary=true
	I0520 04:36:16.355388   16966 ops.go:34] apiserver oom_adj: -16
	I0520 04:36:16.393831   16966 kubeadm.go:1107] duration metric: took 41.776667ms to wait for elevateKubeSystemPrivileges
	W0520 04:36:16.393855   16966 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:36:16.393858   16966 kubeadm.go:393] duration metric: took 4m11.854282959s to StartCluster
	I0520 04:36:16.393867   16966 settings.go:142] acquiring lock: {Name:mkfc25767ac77ec7e329af7eb025d278b3830db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:36:16.393953   16966 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:36:16.394369   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:36:16.394577   16966 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:36:16.398627   16966 out.go:177] * Verifying Kubernetes components...
	I0520 04:36:16.394585   16966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:36:16.394661   16966 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:36:16.406496   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:36:16.406505   16966 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-484000"
	I0520 04:36:16.406517   16966 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-484000"
	W0520 04:36:16.406521   16966 addons.go:243] addon storage-provisioner should already be in state true
	I0520 04:36:16.406529   16966 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-484000"
	I0520 04:36:16.406533   16966 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0520 04:36:16.406539   16966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-484000"
	I0520 04:36:16.407040   16966 retry.go:31] will retry after 1.205385154s: connect: dial unix /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/monitor: connect: connection refused
	I0520 04:36:16.411575   16966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:36:18.439883   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:18.440089   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:18.454563   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:18.454651   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:18.467111   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:18.467182   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:18.482052   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:18.482130   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:18.492195   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:18.492267   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:18.512452   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:18.512527   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:18.523823   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:18.523900   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:18.534619   16800 logs.go:276] 0 containers: []
	W0520 04:36:18.534632   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:18.534685   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:18.545084   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:18.545103   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:18.545110   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:18.549643   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:18.549652   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:18.563762   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:18.563772   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:18.578736   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:18.578750   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:18.611265   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:18.611276   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:18.622708   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:18.622718   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:18.637390   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:18.637400   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:18.661179   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:18.661186   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:18.696950   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:18.696961   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:18.711273   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:18.711288   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:18.722932   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:18.722942   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:18.737371   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:18.737380   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:18.749238   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:18.749248   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:18.766870   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:18.766880   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:18.778670   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:18.778683   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:16.415704   16966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:36:16.415711   16966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:36:16.415718   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:36:16.500799   16966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:36:16.506051   16966 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:36:16.506092   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:36:16.509786   16966 api_server.go:72] duration metric: took 115.209ms to wait for apiserver process to appear ...
	I0520 04:36:16.509794   16966 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:36:16.509800   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:16.548935   16966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:36:17.615500   16966 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:36:17.615645   16966 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-484000"
	W0520 04:36:17.615654   16966 addons.go:243] addon default-storageclass should already be in state true
	I0520 04:36:17.615668   16966 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0520 04:36:17.616500   16966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:36:17.616507   16966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:36:17.616513   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:36:17.649971   16966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:36:21.292436   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:21.510814   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:21.510885   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:26.294392   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:26.294493   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:26.309049   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:26.309121   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:26.328750   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:26.328831   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:26.345544   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:26.345626   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:26.358731   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:26.358794   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:26.373063   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:26.373133   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:26.384374   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:26.384441   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:26.394251   16800 logs.go:276] 0 containers: []
	W0520 04:36:26.394263   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:26.394320   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:26.404842   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:26.404860   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:26.404865   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:26.409916   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:26.409922   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:26.426298   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:26.426309   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:26.438383   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:26.438392   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:26.450007   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:26.450018   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:26.465318   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:26.465332   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:26.477298   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:26.477311   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:26.495223   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:26.495232   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:26.531626   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:26.531639   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:26.547818   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:26.547830   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:26.563207   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:26.563223   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:26.575610   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:26.575622   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:26.609383   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:26.609398   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:26.620630   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:26.620644   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:26.632908   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:26.632918   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:29.157982   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:26.511184   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:26.511206   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:34.160242   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:34.160546   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:34.196000   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:34.196134   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:34.215880   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:34.215979   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:34.230207   16800 logs.go:276] 4 containers: [98f0fbb43f9f 99862b875156 3e28d2642e42 3964253b5a3a]
	I0520 04:36:34.230286   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:34.242356   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:34.242428   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:34.255154   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:34.255229   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:34.266085   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:34.266158   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:34.276942   16800 logs.go:276] 0 containers: []
	W0520 04:36:34.276953   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:34.277014   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:34.287660   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:34.287678   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:34.287683   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:34.301551   16800 logs.go:123] Gathering logs for coredns [3e28d2642e42] ...
	I0520 04:36:34.301562   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e28d2642e42"
	I0520 04:36:34.324206   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:34.324220   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:34.339782   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:34.339796   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:34.351503   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:34.351514   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:34.375220   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:34.375228   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:34.408763   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:34.408770   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:34.445114   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:34.445124   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:34.460048   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:34.460062   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:34.471949   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:34.471959   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:34.489760   16800 logs.go:123] Gathering logs for coredns [3964253b5a3a] ...
	I0520 04:36:34.489770   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3964253b5a3a"
	I0520 04:36:34.501974   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:34.501988   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:34.506564   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:34.506571   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:34.520504   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:34.520516   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:34.531646   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:34.531660   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:31.511167   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:31.511216   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:37.045454   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:36.511565   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:36.511618   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:42.047709   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:42.047876   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:42.067499   16800 logs.go:276] 1 containers: [ce527c47d156]
	I0520 04:36:42.067592   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:42.081221   16800 logs.go:276] 1 containers: [65d56dff2269]
	I0520 04:36:42.081292   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:42.093182   16800 logs.go:276] 4 containers: [c8137d3e26c2 7b48bfdfe496 98f0fbb43f9f 99862b875156]
	I0520 04:36:42.093253   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:42.107664   16800 logs.go:276] 1 containers: [9c4a27e0ad15]
	I0520 04:36:42.107728   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:42.119836   16800 logs.go:276] 1 containers: [ff32a3db6bb6]
	I0520 04:36:42.119895   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:42.130342   16800 logs.go:276] 1 containers: [f957f8a7d085]
	I0520 04:36:42.130399   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:42.141016   16800 logs.go:276] 0 containers: []
	W0520 04:36:42.141028   16800 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:42.141083   16800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:42.153663   16800 logs.go:276] 1 containers: [4c5ff16f80c8]
	I0520 04:36:42.153684   16800 logs.go:123] Gathering logs for container status ...
	I0520 04:36:42.153690   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:42.165394   16800 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:42.165405   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:42.170544   16800 logs.go:123] Gathering logs for etcd [65d56dff2269] ...
	I0520 04:36:42.170551   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65d56dff2269"
	I0520 04:36:42.184682   16800 logs.go:123] Gathering logs for coredns [98f0fbb43f9f] ...
	I0520 04:36:42.184692   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98f0fbb43f9f"
	I0520 04:36:42.196091   16800 logs.go:123] Gathering logs for kube-scheduler [9c4a27e0ad15] ...
	I0520 04:36:42.196103   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4a27e0ad15"
	I0520 04:36:42.210360   16800 logs.go:123] Gathering logs for kube-proxy [ff32a3db6bb6] ...
	I0520 04:36:42.210372   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff32a3db6bb6"
	I0520 04:36:42.222719   16800 logs.go:123] Gathering logs for coredns [99862b875156] ...
	I0520 04:36:42.222734   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99862b875156"
	I0520 04:36:42.234609   16800 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:42.234619   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:42.268844   16800 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:42.268855   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:42.307784   16800 logs.go:123] Gathering logs for storage-provisioner [4c5ff16f80c8] ...
	I0520 04:36:42.307796   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c5ff16f80c8"
	I0520 04:36:42.322091   16800 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:42.322103   16800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:42.345156   16800 logs.go:123] Gathering logs for kube-apiserver [ce527c47d156] ...
	I0520 04:36:42.345165   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce527c47d156"
	I0520 04:36:42.359576   16800 logs.go:123] Gathering logs for coredns [c8137d3e26c2] ...
	I0520 04:36:42.359589   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8137d3e26c2"
	I0520 04:36:42.377107   16800 logs.go:123] Gathering logs for coredns [7b48bfdfe496] ...
	I0520 04:36:42.377121   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b48bfdfe496"
	I0520 04:36:42.389117   16800 logs.go:123] Gathering logs for kube-controller-manager [f957f8a7d085] ...
	I0520 04:36:42.389132   16800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f957f8a7d085"
	I0520 04:36:44.909419   16800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:41.512247   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:41.512288   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:46.512842   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:46.512884   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 04:36:47.758364   16966 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 04:36:47.761517   16966 out.go:177] * Enabled addons: storage-provisioner
	I0520 04:36:49.912133   16800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:49.918368   16800 out.go:177] 
	W0520 04:36:49.922378   16800 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 04:36:49.922406   16800 out.go:239] * 
	W0520 04:36:49.924271   16800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:36:49.929269   16800 out.go:177] 
	I0520 04:36:47.772351   16966 addons.go:505] duration metric: took 31.379177291s for enable addons: enabled=[storage-provisioner]
	I0520 04:36:51.513644   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:51.513686   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:56.514649   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:56.514671   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:01.515858   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:01.515883   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-05-20 11:27:48 UTC, ends at Mon 2024-05-20 11:37:06 UTC. --
	May 20 11:36:41 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:41Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 11:36:46 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 11:36:50 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:50Z" level=error msg="ContainerStats resp: {0x4000725440 linux}"
	May 20 11:36:50 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:50Z" level=error msg="ContainerStats resp: {0x40007dab80 linux}"
	May 20 11:36:51 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 11:36:51 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:51Z" level=error msg="ContainerStats resp: {0x4000958d40 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x400064e040 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x400064e940 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x4000958b00 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x400064ef40 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x40009594c0 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x4000959d00 linux}"
	May 20 11:36:52 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:52Z" level=error msg="ContainerStats resp: {0x40008543c0 linux}"
	May 20 11:36:56 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:36:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 11:37:01 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 11:37:02 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:02Z" level=error msg="ContainerStats resp: {0x40007db380 linux}"
	May 20 11:37:02 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:02Z" level=error msg="ContainerStats resp: {0x40007dba40 linux}"
	May 20 11:37:03 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:03Z" level=error msg="ContainerStats resp: {0x4000671880 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x400064e040 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x400064e780 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x4000854400 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x4000854b40 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x4000854f80 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x4000855380 linux}"
	May 20 11:37:04 running-upgrade-901000 cri-dockerd[3039]: time="2024-05-20T11:37:04Z" level=error msg="ContainerStats resp: {0x4000855800 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c8137d3e26c2d       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   c9bf360059f70
	7b48bfdfe4961       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   dfe9f616e705d
	98f0fbb43f9f2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   dfe9f616e705d
	99862b875156e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c9bf360059f70
	4c5ff16f80c88       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   6483c41541d1d
	ff32a3db6bb61       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   21c3e877057a3
	9c4a27e0ad15e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   96eab5787e58d
	ce527c47d1569       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   fe954213a2f1e
	65d56dff22699       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   05fd2d55d8f7c
	f957f8a7d0850       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   cfaac14cffed7
	
	
	==> coredns [7b48bfdfe496] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:57128->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:44729->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:42793->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:52883->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:57088->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:60231->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8834521237551987711.5992457533038946957. HINFO: read udp 10.244.0.2:60799->10.0.2.3:53: i/o timeout
	
	
	==> coredns [98f0fbb43f9f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1678171726822108444.3372271812656622850. HINFO: read udp 10.244.0.2:52441->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1678171726822108444.3372271812656622850. HINFO: read udp 10.244.0.2:46961->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1678171726822108444.3372271812656622850. HINFO: read udp 10.244.0.2:35976->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1678171726822108444.3372271812656622850. HINFO: read udp 10.244.0.2:54374->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1678171726822108444.3372271812656622850. HINFO: read udp 10.244.0.2:54261->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [99862b875156] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:35995->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:41387->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:39835->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:59025->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:34455->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:49041->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:44980->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:54677->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:50437->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2774040729468124648.4757072686278997448. HINFO: read udp 10.244.0.3:33259->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c8137d3e26c2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5125654558044160284.2210619874389027706. HINFO: read udp 10.244.0.3:53984->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5125654558044160284.2210619874389027706. HINFO: read udp 10.244.0.3:54674->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5125654558044160284.2210619874389027706. HINFO: read udp 10.244.0.3:41374->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5125654558044160284.2210619874389027706. HINFO: read udp 10.244.0.3:41621->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5125654558044160284.2210619874389027706. HINFO: read udp 10.244.0.3:39098->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5125654558044160284.2210619874389027706. HINFO: read udp 10.244.0.3:48855->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-901000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-901000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb
	                    minikube.k8s.io/name=running-upgrade-901000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T04_32_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:32:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-901000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:37:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:32:49 +0000   Mon, 20 May 2024 11:32:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:32:49 +0000   Mon, 20 May 2024 11:32:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:32:49 +0000   Mon, 20 May 2024 11:32:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:32:49 +0000   Mon, 20 May 2024 11:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-901000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 c543e457f6a3430fadc970a8d9316f2f
	  System UUID:                c543e457f6a3430fadc970a8d9316f2f
	  Boot ID:                    df52099a-60cf-466f-98fc-030dca5e129c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-s6b57                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-z7tnl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-901000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-901000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-901000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-r5bjv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-901000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-901000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-901000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-901000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-901000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-901000 event: Registered Node running-upgrade-901000 in Controller
	
	
	==> dmesg <==
	[  +2.466496] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.084167] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[May20 11:28] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.137411] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.083430] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.088248] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.499213] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +9.143658] systemd-fstab-generator[1929]: Ignoring "noauto" for root device
	[  +2.795764] systemd-fstab-generator[2209]: Ignoring "noauto" for root device
	[  +0.139240] systemd-fstab-generator[2243]: Ignoring "noauto" for root device
	[  +0.094008] systemd-fstab-generator[2254]: Ignoring "noauto" for root device
	[  +0.080248] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[ +13.427349] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.203775] systemd-fstab-generator[2995]: Ignoring "noauto" for root device
	[  +0.074698] systemd-fstab-generator[3007]: Ignoring "noauto" for root device
	[  +0.077087] systemd-fstab-generator[3018]: Ignoring "noauto" for root device
	[  +0.083616] systemd-fstab-generator[3032]: Ignoring "noauto" for root device
	[  +1.956635] systemd-fstab-generator[3183]: Ignoring "noauto" for root device
	[  +4.291948] systemd-fstab-generator[3574]: Ignoring "noauto" for root device
	[  +1.298893] systemd-fstab-generator[3842]: Ignoring "noauto" for root device
	[ +19.028367] kauditd_printk_skb: 68 callbacks suppressed
	[May20 11:32] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.346432] systemd-fstab-generator[11904]: Ignoring "noauto" for root device
	[  +5.641362] systemd-fstab-generator[12516]: Ignoring "noauto" for root device
	[  +0.483547] systemd-fstab-generator[12649]: Ignoring "noauto" for root device
	
	
	==> etcd [65d56dff2269] <==
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-20T11:32:44.552Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-20T11:32:45.186Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:32:45.187Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:32:45.188Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:32:45.188Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:32:45.188Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-901000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:32:45.188Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:32:45.188Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:32:45.189Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:32:45.189Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-05-20T11:32:45.188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:32:45.189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:37:06 up 9 min,  0 users,  load average: 0.08, 0.24, 0.15
	Linux running-upgrade-901000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ce527c47d156] <==
	I0520 11:32:46.448352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:32:46.448440       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:32:46.448575       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0520 11:32:46.449077       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:32:46.449115       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0520 11:32:46.457116       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0520 11:32:46.476698       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0520 11:32:47.187095       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 11:32:47.357257       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 11:32:47.362189       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 11:32:47.362348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:32:47.541385       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:32:47.553348       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 11:32:47.625843       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0520 11:32:47.628013       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0520 11:32:47.628381       1 controller.go:611] quota admission added evaluator for: endpoints
	I0520 11:32:47.629656       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 11:32:48.474591       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0520 11:32:48.915873       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0520 11:32:48.918835       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0520 11:32:48.957457       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0520 11:32:48.970316       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:33:01.880855       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0520 11:33:02.132449       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0520 11:33:02.444348       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [f957f8a7d085] <==
	I0520 11:33:01.339953       1 range_allocator.go:374] Set node running-upgrade-901000 PodCIDR to [10.244.0.0/24]
	I0520 11:33:01.340998       1 shared_informer.go:262] Caches are synced for ephemeral
	I0520 11:33:01.343712       1 shared_informer.go:262] Caches are synced for daemon sets
	I0520 11:33:01.344984       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0520 11:33:01.345031       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0520 11:33:01.347502       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0520 11:33:01.348557       1 shared_informer.go:262] Caches are synced for PV protection
	I0520 11:33:01.361126       1 shared_informer.go:262] Caches are synced for deployment
	I0520 11:33:01.378650       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0520 11:33:01.383057       1 shared_informer.go:262] Caches are synced for stateful set
	I0520 11:33:01.425362       1 shared_informer.go:262] Caches are synced for attach detach
	I0520 11:33:01.525145       1 shared_informer.go:262] Caches are synced for endpoint
	I0520 11:33:01.526255       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0520 11:33:01.557024       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 11:33:01.565095       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 11:33:01.575954       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0520 11:33:01.591061       1 shared_informer.go:262] Caches are synced for disruption
	I0520 11:33:01.591106       1 disruption.go:371] Sending events to api server.
	I0520 11:33:01.886503       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r5bjv"
	I0520 11:33:01.978267       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 11:33:02.025058       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 11:33:02.025110       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0520 11:33:02.134093       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0520 11:33:02.331060       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-s6b57"
	I0520 11:33:02.334120       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-z7tnl"
	
	
	==> kube-proxy [ff32a3db6bb6] <==
	I0520 11:33:02.417940       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0520 11:33:02.417968       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0520 11:33:02.417979       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0520 11:33:02.441684       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0520 11:33:02.441739       1 server_others.go:206] "Using iptables Proxier"
	I0520 11:33:02.441756       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0520 11:33:02.441852       1 server.go:661] "Version info" version="v1.24.1"
	I0520 11:33:02.441856       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:33:02.442082       1 config.go:317] "Starting service config controller"
	I0520 11:33:02.442091       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0520 11:33:02.442102       1 config.go:226] "Starting endpoint slice config controller"
	I0520 11:33:02.442103       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0520 11:33:02.442940       1 config.go:444] "Starting node config controller"
	I0520 11:33:02.442952       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0520 11:33:02.543187       1 shared_informer.go:262] Caches are synced for node config
	I0520 11:33:02.543187       1 shared_informer.go:262] Caches are synced for service config
	I0520 11:33:02.543221       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9c4a27e0ad15] <==
	W0520 11:32:46.409907       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 11:32:46.409911       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 11:32:46.409943       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:32:46.409951       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:32:46.409968       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:32:46.409975       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:32:46.409992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:32:46.410014       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 11:32:46.410034       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 11:32:46.410041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 11:32:46.410060       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:32:46.410066       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:32:46.410098       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 11:32:46.410104       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 11:32:46.410131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:32:46.410137       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 11:32:46.410153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:32:46.410159       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:32:46.410189       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:32:46.410193       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:32:46.410205       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:32:46.410208       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:32:47.339366       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:32:47.339453       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0520 11:32:47.696802       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-05-20 11:27:48 UTC, ends at Mon 2024-05-20 11:37:06 UTC. --
	May 20 11:32:49 running-upgrade-901000 kubelet[12522]: I0520 11:32:49.170422   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2acf5f764502a2d715092c929a7df501-k8s-certs\") pod \"kube-apiserver-running-upgrade-901000\" (UID: \"2acf5f764502a2d715092c929a7df501\") " pod="kube-system/kube-apiserver-running-upgrade-901000"
	May 20 11:32:49 running-upgrade-901000 kubelet[12522]: I0520 11:32:49.170427   12522 reconciler.go:157] "Reconciler: start to sync state"
	May 20 11:32:49 running-upgrade-901000 kubelet[12522]: E0520 11:32:49.546830   12522 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-901000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-901000"
	May 20 11:32:49 running-upgrade-901000 kubelet[12522]: E0520 11:32:49.746781   12522 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-901000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-901000"
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: I0520 11:33:01.337848   12522 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: I0520 11:33:01.368148   12522 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: I0520 11:33:01.368561   12522 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: I0520 11:33:01.468419   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxv5\" (UniqueName: \"kubernetes.io/projected/1dec4c7b-4582-462d-8e45-4ff0b9034951-kube-api-access-2zxv5\") pod \"storage-provisioner\" (UID: \"1dec4c7b-4582-462d-8e45-4ff0b9034951\") " pod="kube-system/storage-provisioner"
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: I0520 11:33:01.468447   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1dec4c7b-4582-462d-8e45-4ff0b9034951-tmp\") pod \"storage-provisioner\" (UID: \"1dec4c7b-4582-462d-8e45-4ff0b9034951\") " pod="kube-system/storage-provisioner"
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: E0520 11:33:01.572318   12522 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: E0520 11:33:01.572339   12522 projected.go:192] Error preparing data for projected volume kube-api-access-2zxv5 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: E0520 11:33:01.572379   12522 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/1dec4c7b-4582-462d-8e45-4ff0b9034951-kube-api-access-2zxv5 podName:1dec4c7b-4582-462d-8e45-4ff0b9034951 nodeName:}" failed. No retries permitted until 2024-05-20 11:33:02.072363516 +0000 UTC m=+13.167659182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2zxv5" (UniqueName: "kubernetes.io/projected/1dec4c7b-4582-462d-8e45-4ff0b9034951-kube-api-access-2zxv5") pod "storage-provisioner" (UID: "1dec4c7b-4582-462d-8e45-4ff0b9034951") : configmap "kube-root-ca.crt" not found
	May 20 11:33:01 running-upgrade-901000 kubelet[12522]: I0520 11:33:01.888651   12522 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.076574   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e7a8d54-d711-477d-881c-5dcea2dadc42-kube-proxy\") pod \"kube-proxy-r5bjv\" (UID: \"4e7a8d54-d711-477d-881c-5dcea2dadc42\") " pod="kube-system/kube-proxy-r5bjv"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.076609   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e7a8d54-d711-477d-881c-5dcea2dadc42-xtables-lock\") pod \"kube-proxy-r5bjv\" (UID: \"4e7a8d54-d711-477d-881c-5dcea2dadc42\") " pod="kube-system/kube-proxy-r5bjv"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.076640   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e7a8d54-d711-477d-881c-5dcea2dadc42-lib-modules\") pod \"kube-proxy-r5bjv\" (UID: \"4e7a8d54-d711-477d-881c-5dcea2dadc42\") " pod="kube-system/kube-proxy-r5bjv"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.076653   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfxt\" (UniqueName: \"kubernetes.io/projected/4e7a8d54-d711-477d-881c-5dcea2dadc42-kube-api-access-7nfxt\") pod \"kube-proxy-r5bjv\" (UID: \"4e7a8d54-d711-477d-881c-5dcea2dadc42\") " pod="kube-system/kube-proxy-r5bjv"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.333678   12522 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.345107   12522 topology_manager.go:200] "Topology Admit Handler"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.479354   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8sd\" (UniqueName: \"kubernetes.io/projected/2191ee54-08eb-4858-ae1c-10fa94ec8ccd-kube-api-access-pv8sd\") pod \"coredns-6d4b75cb6d-z7tnl\" (UID: \"2191ee54-08eb-4858-ae1c-10fa94ec8ccd\") " pod="kube-system/coredns-6d4b75cb6d-z7tnl"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.479388   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc2e1a2f-0774-4ca8-ba1f-930339fd7a0c-config-volume\") pod \"coredns-6d4b75cb6d-s6b57\" (UID: \"fc2e1a2f-0774-4ca8-ba1f-930339fd7a0c\") " pod="kube-system/coredns-6d4b75cb6d-s6b57"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.479400   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2191ee54-08eb-4858-ae1c-10fa94ec8ccd-config-volume\") pod \"coredns-6d4b75cb6d-z7tnl\" (UID: \"2191ee54-08eb-4858-ae1c-10fa94ec8ccd\") " pod="kube-system/coredns-6d4b75cb6d-z7tnl"
	May 20 11:33:02 running-upgrade-901000 kubelet[12522]: I0520 11:33:02.479412   12522 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkm7\" (UniqueName: \"kubernetes.io/projected/fc2e1a2f-0774-4ca8-ba1f-930339fd7a0c-kube-api-access-qbkm7\") pod \"coredns-6d4b75cb6d-s6b57\" (UID: \"fc2e1a2f-0774-4ca8-ba1f-930339fd7a0c\") " pod="kube-system/coredns-6d4b75cb6d-s6b57"
	May 20 11:36:41 running-upgrade-901000 kubelet[12522]: I0520 11:36:41.424444   12522 scope.go:110] "RemoveContainer" containerID="3964253b5a3a8400d075a65af491b0a6c387d6ff326cb706087aaec08ad7c7c7"
	May 20 11:36:41 running-upgrade-901000 kubelet[12522]: I0520 11:36:41.437055   12522 scope.go:110] "RemoveContainer" containerID="3e28d2642e4244a23ca32777b83684b611b1809418bd3d4c8f73be065f3aea36"
	
	
	==> storage-provisioner [4c5ff16f80c8] <==
	I0520 11:33:02.514194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:33:02.520554       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:33:02.520578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:33:02.523723       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:33:02.523776       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-901000_9afeceab-7078-4f19-a6b0-74cea90f6bcc!
	I0520 11:33:02.524226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83ca50a0-b799-477c-a6b0-0ea3fbf06f61", APIVersion:"v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-901000_9afeceab-7078-4f19-a6b0-74cea90f6bcc became leader
	I0520 11:33:02.625645       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-901000_9afeceab-7078-4f19-a6b0-74cea90f6bcc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-901000 -n running-upgrade-901000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-901000 -n running-upgrade-901000: exit status 2 (15.717248625s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-901000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-901000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-901000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-901000: (1.081595708s)
--- FAIL: TestRunningBinaryUpgrade (601.06s)

TestKubernetesUpgrade (17.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-815000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-815000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.062747s)

-- stdout --
	* [kubernetes-upgrade-815000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-815000" primary control-plane node in "kubernetes-upgrade-815000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-815000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:30:22.394606   16882 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:30:22.394734   16882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:30:22.394736   16882 out.go:304] Setting ErrFile to fd 2...
	I0520 04:30:22.394739   16882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:30:22.394869   16882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:30:22.395965   16882 out.go:298] Setting JSON to false
	I0520 04:30:22.412039   16882 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8993,"bootTime":1716195629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:30:22.412112   16882 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:30:22.416732   16882 out.go:177] * [kubernetes-upgrade-815000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:30:22.424965   16882 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:30:22.425004   16882 notify.go:220] Checking for updates...
	I0520 04:30:22.428888   16882 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:30:22.431901   16882 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:30:22.434889   16882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:30:22.437845   16882 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:30:22.440893   16882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:30:22.444252   16882 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:30:22.444324   16882 config.go:182] Loaded profile config "running-upgrade-901000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:30:22.444373   16882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:30:22.447785   16882 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:30:22.454870   16882 start.go:297] selected driver: qemu2
	I0520 04:30:22.454879   16882 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:30:22.454886   16882 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:30:22.457185   16882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:30:22.458406   16882 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:30:22.460908   16882 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:30:22.460919   16882 cni.go:84] Creating CNI manager for ""
	I0520 04:30:22.460927   16882 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:30:22.460963   16882 start.go:340] cluster config:
	{Name:kubernetes-upgrade-815000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:30:22.465375   16882 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:30:22.472853   16882 out.go:177] * Starting "kubernetes-upgrade-815000" primary control-plane node in "kubernetes-upgrade-815000" cluster
	I0520 04:30:22.476863   16882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:30:22.476883   16882 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:30:22.476893   16882 cache.go:56] Caching tarball of preloaded images
	I0520 04:30:22.476972   16882 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:30:22.476977   16882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:30:22.477038   16882 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kubernetes-upgrade-815000/config.json ...
	I0520 04:30:22.477049   16882 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kubernetes-upgrade-815000/config.json: {Name:mk326a7563e7287325e0a5b08517242e7788456a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:30:22.477342   16882 start.go:360] acquireMachinesLock for kubernetes-upgrade-815000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:30:22.477377   16882 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "kubernetes-upgrade-815000"
	I0520 04:30:22.477389   16882 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:30:22.477419   16882 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:30:22.485793   16882 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:30:22.512269   16882 start.go:159] libmachine.API.Create for "kubernetes-upgrade-815000" (driver="qemu2")
	I0520 04:30:22.512297   16882 client.go:168] LocalClient.Create starting
	I0520 04:30:22.512371   16882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:30:22.512401   16882 main.go:141] libmachine: Decoding PEM data...
	I0520 04:30:22.512411   16882 main.go:141] libmachine: Parsing certificate...
	I0520 04:30:22.512454   16882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:30:22.512478   16882 main.go:141] libmachine: Decoding PEM data...
	I0520 04:30:22.512486   16882 main.go:141] libmachine: Parsing certificate...
	I0520 04:30:22.512897   16882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:30:22.676560   16882 main.go:141] libmachine: Creating SSH key...
	I0520 04:30:22.829631   16882 main.go:141] libmachine: Creating Disk image...
	I0520 04:30:22.829638   16882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:30:22.829862   16882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:22.851433   16882 main.go:141] libmachine: STDOUT: 
	I0520 04:30:22.851455   16882 main.go:141] libmachine: STDERR: 
	I0520 04:30:22.851504   16882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2 +20000M
	I0520 04:30:22.862590   16882 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:30:22.862609   16882 main.go:141] libmachine: STDERR: 
	I0520 04:30:22.862629   16882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:22.862633   16882 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:30:22.862662   16882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:06:71:98:18:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:22.864395   16882 main.go:141] libmachine: STDOUT: 
	I0520 04:30:22.864412   16882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:30:22.864430   16882 client.go:171] duration metric: took 352.132167ms to LocalClient.Create
	I0520 04:30:24.866616   16882 start.go:128] duration metric: took 2.389197666s to createHost
	I0520 04:30:24.866741   16882 start.go:83] releasing machines lock for "kubernetes-upgrade-815000", held for 2.389370083s
	W0520 04:30:24.866824   16882 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:30:24.874223   16882 out.go:177] * Deleting "kubernetes-upgrade-815000" in qemu2 ...
	W0520 04:30:24.901519   16882 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:30:24.901557   16882 start.go:728] Will try again in 5 seconds ...
	I0520 04:30:29.903748   16882 start.go:360] acquireMachinesLock for kubernetes-upgrade-815000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:30:29.904250   16882 start.go:364] duration metric: took 382.167µs to acquireMachinesLock for "kubernetes-upgrade-815000"
	I0520 04:30:29.904377   16882 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:30:29.904668   16882 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:30:29.914283   16882 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:30:29.966902   16882 start.go:159] libmachine.API.Create for "kubernetes-upgrade-815000" (driver="qemu2")
	I0520 04:30:29.966950   16882 client.go:168] LocalClient.Create starting
	I0520 04:30:29.967064   16882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:30:29.967124   16882 main.go:141] libmachine: Decoding PEM data...
	I0520 04:30:29.967140   16882 main.go:141] libmachine: Parsing certificate...
	I0520 04:30:29.967211   16882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:30:29.967254   16882 main.go:141] libmachine: Decoding PEM data...
	I0520 04:30:29.967268   16882 main.go:141] libmachine: Parsing certificate...
	I0520 04:30:29.967920   16882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:30:30.117272   16882 main.go:141] libmachine: Creating SSH key...
	I0520 04:30:30.359140   16882 main.go:141] libmachine: Creating Disk image...
	I0520 04:30:30.359154   16882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:30:30.359390   16882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:30.372378   16882 main.go:141] libmachine: STDOUT: 
	I0520 04:30:30.372405   16882 main.go:141] libmachine: STDERR: 
	I0520 04:30:30.372490   16882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2 +20000M
	I0520 04:30:30.383884   16882 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:30:30.383899   16882 main.go:141] libmachine: STDERR: 
	I0520 04:30:30.383913   16882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:30.383919   16882 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:30:30.383960   16882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:dd:37:3e:f0:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:30.385695   16882 main.go:141] libmachine: STDOUT: 
	I0520 04:30:30.385709   16882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:30:30.385724   16882 client.go:171] duration metric: took 418.769459ms to LocalClient.Create
	I0520 04:30:32.386754   16882 start.go:128] duration metric: took 2.482066375s to createHost
	I0520 04:30:32.386849   16882 start.go:83] releasing machines lock for "kubernetes-upgrade-815000", held for 2.482560709s
	W0520 04:30:32.387273   16882 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-815000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-815000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:30:32.399125   16882 out.go:177] 
	W0520 04:30:32.403141   16882 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:30:32.403179   16882 out.go:239] * 
	* 
	W0520 04:30:32.405680   16882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:30:32.416114   16882 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-815000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-815000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-815000: (2.159184208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-815000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-815000 status --format={{.Host}}: exit status 7 (46.841625ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-815000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-815000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.175379834s)

-- stdout --
	* [kubernetes-upgrade-815000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-815000" primary control-plane node in "kubernetes-upgrade-815000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:30:34.666172   16915 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:30:34.666304   16915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:30:34.666307   16915 out.go:304] Setting ErrFile to fd 2...
	I0520 04:30:34.666313   16915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:30:34.666444   16915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:30:34.667470   16915 out.go:298] Setting JSON to false
	I0520 04:30:34.686030   16915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9005,"bootTime":1716195629,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:30:34.686125   16915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:30:34.690344   16915 out.go:177] * [kubernetes-upgrade-815000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:30:34.698360   16915 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:30:34.698457   16915 notify.go:220] Checking for updates...
	I0520 04:30:34.705316   16915 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:30:34.708341   16915 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:30:34.711327   16915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:30:34.714361   16915 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:30:34.717328   16915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:30:34.720656   16915 config.go:182] Loaded profile config "kubernetes-upgrade-815000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 04:30:34.720909   16915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:30:34.725233   16915 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:30:34.732341   16915 start.go:297] selected driver: qemu2
	I0520 04:30:34.732352   16915 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-815000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:30:34.732404   16915 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:30:34.734856   16915 cni.go:84] Creating CNI manager for ""
	I0520 04:30:34.734875   16915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:30:34.734897   16915 start.go:340] cluster config:
	{Name:kubernetes-upgrade-815000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-815000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:30:34.739492   16915 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:30:34.746345   16915 out.go:177] * Starting "kubernetes-upgrade-815000" primary control-plane node in "kubernetes-upgrade-815000" cluster
	I0520 04:30:34.750284   16915 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:30:34.750309   16915 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:30:34.750322   16915 cache.go:56] Caching tarball of preloaded images
	I0520 04:30:34.750400   16915 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:30:34.750408   16915 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:30:34.750461   16915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kubernetes-upgrade-815000/config.json ...
	I0520 04:30:34.750804   16915 start.go:360] acquireMachinesLock for kubernetes-upgrade-815000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:30:34.750835   16915 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "kubernetes-upgrade-815000"
	I0520 04:30:34.750844   16915 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:30:34.750851   16915 fix.go:54] fixHost starting: 
	I0520 04:30:34.750961   16915 fix.go:112] recreateIfNeeded on kubernetes-upgrade-815000: state=Stopped err=<nil>
	W0520 04:30:34.750970   16915 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:30:34.758069   16915 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-815000" ...
	I0520 04:30:34.762352   16915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:dd:37:3e:f0:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:34.764547   16915 main.go:141] libmachine: STDOUT: 
	I0520 04:30:34.764570   16915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:30:34.764600   16915 fix.go:56] duration metric: took 13.750542ms for fixHost
	I0520 04:30:34.764605   16915 start.go:83] releasing machines lock for "kubernetes-upgrade-815000", held for 13.766208ms
	W0520 04:30:34.764613   16915 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:30:34.764649   16915 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:30:34.764653   16915 start.go:728] Will try again in 5 seconds ...
	I0520 04:30:39.766715   16915 start.go:360] acquireMachinesLock for kubernetes-upgrade-815000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:30:39.766960   16915 start.go:364] duration metric: took 203.916µs to acquireMachinesLock for "kubernetes-upgrade-815000"
	I0520 04:30:39.767014   16915 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:30:39.767023   16915 fix.go:54] fixHost starting: 
	I0520 04:30:39.767284   16915 fix.go:112] recreateIfNeeded on kubernetes-upgrade-815000: state=Stopped err=<nil>
	W0520 04:30:39.767292   16915 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:30:39.776495   16915 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-815000" ...
	I0520 04:30:39.780529   16915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:dd:37:3e:f0:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubernetes-upgrade-815000/disk.qcow2
	I0520 04:30:39.783901   16915 main.go:141] libmachine: STDOUT: 
	I0520 04:30:39.783927   16915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:30:39.783955   16915 fix.go:56] duration metric: took 16.932583ms for fixHost
	I0520 04:30:39.783960   16915 start.go:83] releasing machines lock for "kubernetes-upgrade-815000", held for 16.989417ms
	W0520 04:30:39.784024   16915 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-815000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-815000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:30:39.791450   16915 out.go:177] 
	W0520 04:30:39.794502   16915 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:30:39.794509   16915 out.go:239] * 
	* 
	W0520 04:30:39.795252   16915 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:30:39.805439   16915 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-815000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-815000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-815000 version --output=json: exit status 1 (35.938875ms)

** stderr ** 
	error: context "kubernetes-upgrade-815000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-20 04:30:39.850801 -0700 PDT m=+927.356266501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-815000 -n kubernetes-upgrade-815000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-815000 -n kubernetes-upgrade-815000: exit status 7 (27.983375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-815000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-815000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-815000
--- FAIL: TestKubernetesUpgrade (17.61s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.19s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18932
- KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current472407876/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.19s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.54s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18932
- KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current96026350/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.54s)

TestStoppedBinaryUpgrade/Upgrade (576.65s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1161568740 start -p stopped-upgrade-484000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1161568740 start -p stopped-upgrade-484000 --memory=2200 --vm-driver=qemu2 : (41.829222s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1161568740 -p stopped-upgrade-484000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1161568740 -p stopped-upgrade-484000 stop: (12.119882666s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-484000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-484000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.5803985s)

-- stdout --
	* [stopped-upgrade-484000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-484000" primary control-plane node in "stopped-upgrade-484000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-484000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0520 04:31:34.979580   16966 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:31:34.979752   16966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:31:34.979756   16966 out.go:304] Setting ErrFile to fd 2...
	I0520 04:31:34.979758   16966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:31:34.979920   16966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:31:34.981092   16966 out.go:298] Setting JSON to false
	I0520 04:31:35.000555   16966 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9065,"bootTime":1716195629,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:31:35.000623   16966 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:31:35.006007   16966 out.go:177] * [stopped-upgrade-484000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:31:35.012989   16966 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:31:35.016962   16966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:31:35.013061   16966 notify.go:220] Checking for updates...
	I0520 04:31:35.022949   16966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:31:35.026047   16966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:31:35.029008   16966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:31:35.031945   16966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:31:35.035327   16966 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:31:35.036883   16966 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:31:35.040048   16966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:31:35.043999   16966 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:31:35.050970   16966 start.go:297] selected driver: qemu2
	I0520 04:31:35.050978   16966 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:31:35.051060   16966 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:31:35.053706   16966 cni.go:84] Creating CNI manager for ""
	I0520 04:31:35.053724   16966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:31:35.053752   16966 start.go:340] cluster config:
	{Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:31:35.053811   16966 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:31:35.060873   16966 out.go:177] * Starting "stopped-upgrade-484000" primary control-plane node in "stopped-upgrade-484000" cluster
	I0520 04:31:35.064911   16966 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:31:35.064927   16966 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 04:31:35.064938   16966 cache.go:56] Caching tarball of preloaded images
	I0520 04:31:35.064989   16966 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:31:35.064995   16966 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 04:31:35.065055   16966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0520 04:31:35.065461   16966 start.go:360] acquireMachinesLock for stopped-upgrade-484000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:31:35.065496   16966 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "stopped-upgrade-484000"
	I0520 04:31:35.065507   16966 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:31:35.065513   16966 fix.go:54] fixHost starting: 
	I0520 04:31:35.065630   16966 fix.go:112] recreateIfNeeded on stopped-upgrade-484000: state=Stopped err=<nil>
	W0520 04:31:35.065638   16966 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:31:35.073936   16966 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-484000" ...
	I0520 04:31:35.078095   16966 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53162-:22,hostfwd=tcp::53163-:2376,hostname=stopped-upgrade-484000 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/disk.qcow2
	I0520 04:31:35.125165   16966 main.go:141] libmachine: STDOUT: 
	I0520 04:31:35.125191   16966 main.go:141] libmachine: STDERR: 
	I0520 04:31:35.125196   16966 main.go:141] libmachine: Waiting for VM to start (ssh -p 53162 docker@127.0.0.1)...
	I0520 04:31:55.284637   16966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/config.json ...
	I0520 04:31:55.285452   16966 machine.go:94] provisionDockerMachine start ...
	I0520 04:31:55.285695   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.286304   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.286329   16966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 04:31:55.372642   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 04:31:55.372675   16966 buildroot.go:166] provisioning hostname "stopped-upgrade-484000"
	I0520 04:31:55.372822   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.373174   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.373185   16966 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-484000 && echo "stopped-upgrade-484000" | sudo tee /etc/hostname
	I0520 04:31:55.446939   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-484000
	
	I0520 04:31:55.447003   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.447153   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.447164   16966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-484000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-484000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-484000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 04:31:55.514228   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 04:31:55.514241   16966 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18932-14402/.minikube CaCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18932-14402/.minikube}
	I0520 04:31:55.514259   16966 buildroot.go:174] setting up certificates
	I0520 04:31:55.514265   16966 provision.go:84] configureAuth start
	I0520 04:31:55.514274   16966 provision.go:143] copyHostCerts
	I0520 04:31:55.514359   16966 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem, removing ...
	I0520 04:31:55.514367   16966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem
	I0520 04:31:55.514508   16966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.pem (1078 bytes)
	I0520 04:31:55.514738   16966 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem, removing ...
	I0520 04:31:55.514742   16966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem
	I0520 04:31:55.514811   16966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/cert.pem (1123 bytes)
	I0520 04:31:55.514949   16966 exec_runner.go:144] found /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem, removing ...
	I0520 04:31:55.514954   16966 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem
	I0520 04:31:55.515019   16966 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18932-14402/.minikube/key.pem (1679 bytes)
	I0520 04:31:55.515143   16966 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-484000 san=[127.0.0.1 localhost minikube stopped-upgrade-484000]
	I0520 04:31:55.593875   16966 provision.go:177] copyRemoteCerts
	I0520 04:31:55.593927   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 04:31:55.593940   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:31:55.625584   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 04:31:55.632248   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 04:31:55.638879   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 04:31:55.646369   16966 provision.go:87] duration metric: took 132.100084ms to configureAuth
	I0520 04:31:55.646379   16966 buildroot.go:189] setting minikube options for container-runtime
	I0520 04:31:55.646499   16966 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:31:55.646533   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.646620   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.646627   16966 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 04:31:55.707828   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 04:31:55.707836   16966 buildroot.go:70] root file system type: tmpfs
	I0520 04:31:55.707890   16966 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 04:31:55.707938   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.708039   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.708074   16966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 04:31:55.771097   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 04:31:55.771145   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:55.771241   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:55.771249   16966 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 04:31:56.136506   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 04:31:56.136522   16966 machine.go:97] duration metric: took 851.070375ms to provisionDockerMachine
	I0520 04:31:56.136531   16966 start.go:293] postStartSetup for "stopped-upgrade-484000" (driver="qemu2")
	I0520 04:31:56.136538   16966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 04:31:56.136628   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 04:31:56.136643   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:31:56.168326   16966 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 04:31:56.169753   16966 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 04:31:56.169762   16966 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18932-14402/.minikube/addons for local assets ...
	I0520 04:31:56.169858   16966 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18932-14402/.minikube/files for local assets ...
	I0520 04:31:56.169995   16966 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem -> 148952.pem in /etc/ssl/certs
	I0520 04:31:56.170126   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 04:31:56.174656   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem --> /etc/ssl/certs/148952.pem (1708 bytes)
	I0520 04:31:56.183154   16966 start.go:296] duration metric: took 46.615708ms for postStartSetup
	I0520 04:31:56.183175   16966 fix.go:56] duration metric: took 21.11791725s for fixHost
	I0520 04:31:56.183222   16966 main.go:141] libmachine: Using SSH client type: native
	I0520 04:31:56.183338   16966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104616900] 0x104619160 <nil>  [] 0s} localhost 53162 <nil> <nil>}
	I0520 04:31:56.183342   16966 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 04:31:56.246653   16966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716204716.018939379
	
	I0520 04:31:56.246662   16966 fix.go:216] guest clock: 1716204716.018939379
	I0520 04:31:56.246667   16966 fix.go:229] Guest: 2024-05-20 04:31:56.018939379 -0700 PDT Remote: 2024-05-20 04:31:56.183177 -0700 PDT m=+21.233222792 (delta=-164.237621ms)
	I0520 04:31:56.246678   16966 fix.go:200] guest clock delta is within tolerance: -164.237621ms
	I0520 04:31:56.246680   16966 start.go:83] releasing machines lock for "stopped-upgrade-484000", held for 21.181434666s
	I0520 04:31:56.246752   16966 ssh_runner.go:195] Run: cat /version.json
	I0520 04:31:56.246758   16966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 04:31:56.246761   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:31:56.246780   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	W0520 04:31:56.279363   16966 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 04:31:56.279417   16966 ssh_runner.go:195] Run: systemctl --version
	I0520 04:31:56.437131   16966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 04:31:56.439872   16966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 04:31:56.439914   16966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 04:31:56.444210   16966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 04:31:56.450888   16966 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 04:31:56.450898   16966 start.go:494] detecting cgroup driver to use...
	I0520 04:31:56.450990   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:31:56.459115   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 04:31:56.462967   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 04:31:56.466522   16966 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 04:31:56.466546   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 04:31:56.470055   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:31:56.473320   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 04:31:56.476023   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 04:31:56.478913   16966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 04:31:56.482333   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 04:31:56.485521   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 04:31:56.488275   16966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 04:31:56.491094   16966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 04:31:56.494336   16966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 04:31:56.497178   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:56.563678   16966 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 04:31:56.570637   16966 start.go:494] detecting cgroup driver to use...
	I0520 04:31:56.570722   16966 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 04:31:56.576387   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:31:56.588367   16966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 04:31:56.595126   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 04:31:56.600043   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:31:56.604536   16966 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 04:31:56.670719   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 04:31:56.676211   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 04:31:56.681639   16966 ssh_runner.go:195] Run: which cri-dockerd
	I0520 04:31:56.682822   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 04:31:56.685697   16966 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 04:31:56.690400   16966 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 04:31:56.770535   16966 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 04:31:56.846927   16966 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 04:31:56.846996   16966 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 04:31:56.852560   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:56.934946   16966 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:31:58.109051   16966 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.174103917s)
	I0520 04:31:58.109104   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 04:31:58.113812   16966 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 04:31:58.120071   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:31:58.124836   16966 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 04:31:58.193671   16966 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 04:31:58.265036   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:58.340911   16966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 04:31:58.346330   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 04:31:58.351004   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:31:58.429946   16966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 04:31:58.469480   16966 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 04:31:58.469559   16966 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 04:31:58.471466   16966 start.go:562] Will wait 60s for crictl version
	I0520 04:31:58.471495   16966 ssh_runner.go:195] Run: which crictl
	I0520 04:31:58.472679   16966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 04:31:58.487739   16966 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 04:31:58.487812   16966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:31:58.504837   16966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 04:31:58.524725   16966 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 04:31:58.524804   16966 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 04:31:58.526303   16966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:31:58.530355   16966 kubeadm.go:877] updating cluster {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 04:31:58.530407   16966 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 04:31:58.530452   16966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:31:58.541544   16966 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:31:58.541569   16966 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:31:58.541624   16966 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:31:58.544824   16966 ssh_runner.go:195] Run: which lz4
	I0520 04:31:58.546356   16966 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 04:31:58.547775   16966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 04:31:58.547787   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 04:31:59.210684   16966 docker.go:649] duration metric: took 664.364167ms to copy over tarball
	I0520 04:31:59.210752   16966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 04:32:00.376945   16966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.166190917s)
	I0520 04:32:00.376963   16966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 04:32:00.392753   16966 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 04:32:00.395579   16966 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 04:32:00.400622   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:32:00.482479   16966 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 04:32:02.687565   16966 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.205091167s)
	I0520 04:32:02.687660   16966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 04:32:02.700863   16966 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 04:32:02.700871   16966 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 04:32:02.700881   16966 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 04:32:02.714959   16966 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:02.716002   16966 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:02.716097   16966 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 04:32:02.716128   16966 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:02.716158   16966 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:02.716281   16966 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:02.716313   16966 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:02.716361   16966 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:02.726295   16966 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:02.726339   16966 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:02.726363   16966 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:02.726391   16966 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 04:32:02.726429   16966 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:02.726473   16966 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:02.726591   16966 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:02.726502   16966 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.345035   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:03.357857   16966 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 04:32:03.357883   16966 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:03.357935   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 04:32:03.367760   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:03.368588   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 04:32:03.379564   16966 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 04:32:03.379582   16966 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:03.379627   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 04:32:03.386418   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 04:32:03.388357   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:03.392755   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 04:32:03.398975   16966 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 04:32:03.399001   16966 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 04:32:03.399056   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 04:32:03.404704   16966 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 04:32:03.404732   16966 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:03.404793   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 04:32:03.412792   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 04:32:03.412915   16966 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0520 04:32:03.416845   16966 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 04:32:03.416972   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:03.418384   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 04:32:03.418394   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 04:32:03.418407   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 04:32:03.427331   16966 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 04:32:03.427344   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 04:32:03.434656   16966 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 04:32:03.434677   16966 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:03.434731   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 04:32:03.447083   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:03.469490   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 04:32:03.469537   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 04:32:03.469552   16966 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 04:32:03.469569   16966 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:03.469616   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 04:32:03.469640   16966 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:32:03.471088   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 04:32:03.471101   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 04:32:03.474865   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.486826   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 04:32:03.497067   16966 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 04:32:03.497094   16966 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.497166   16966 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 04:32:03.517279   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 04:32:03.517398   16966 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:32:03.524655   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 04:32:03.524689   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 04:32:03.526790   16966 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 04:32:03.526799   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 04:32:03.607956   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 04:32:03.723811   16966 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 04:32:03.723825   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0520 04:32:03.873177   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0520 04:32:03.901844   16966 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 04:32:03.901953   16966 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:03.912817   16966 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 04:32:03.912850   16966 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:03.912907   16966 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:32:03.933891   16966 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 04:32:03.934013   16966 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:32:03.935359   16966 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 04:32:03.935368   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 04:32:03.967631   16966 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 04:32:03.967642   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 04:32:04.199568   16966 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 04:32:04.199607   16966 cache_images.go:92] duration metric: took 1.498738417s to LoadCachedImages
	W0520 04:32:04.199646   16966 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0520 04:32:04.199655   16966 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 04:32:04.199725   16966 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-484000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 04:32:04.199788   16966 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 04:32:04.219810   16966 cni.go:84] Creating CNI manager for ""
	I0520 04:32:04.219823   16966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:32:04.219830   16966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 04:32:04.219837   16966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-484000 NodeName:stopped-upgrade-484000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 04:32:04.219897   16966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-484000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 04:32:04.219945   16966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 04:32:04.223175   16966 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 04:32:04.223201   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 04:32:04.226383   16966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 04:32:04.231360   16966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 04:32:04.236275   16966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0520 04:32:04.241404   16966 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 04:32:04.242517   16966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 04:32:04.246181   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:32:04.326219   16966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:32:04.332603   16966 certs.go:68] Setting up /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000 for IP: 10.0.2.15
	I0520 04:32:04.332613   16966 certs.go:194] generating shared ca certs ...
	I0520 04:32:04.332622   16966 certs.go:226] acquiring lock for ca certs: {Name:mk68bd2733d4beefbc93944c03f6a3a33405f849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.332811   16966 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.key
	I0520 04:32:04.333584   16966 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.key
	I0520 04:32:04.333591   16966 certs.go:256] generating profile certs ...
	I0520 04:32:04.333814   16966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key
	I0520 04:32:04.333834   16966 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968
	I0520 04:32:04.333847   16966 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 04:32:04.416053   16966 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 ...
	I0520 04:32:04.416069   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968: {Name:mkbabd86edee89dc28de2080d193c5ddccc74e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.416396   16966 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 ...
	I0520 04:32:04.416402   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968: {Name:mkd86c0394a3353f0a09a4031d227860b5b7c472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.418133   16966 certs.go:381] copying /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt.52cfa968 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt
	I0520 04:32:04.418334   16966 certs.go:385] copying /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key.52cfa968 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key
	I0520 04:32:04.418622   16966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/proxy-client.key
	I0520 04:32:04.418794   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895.pem (1338 bytes)
	W0520 04:32:04.418973   16966 certs.go:480] ignoring /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895_empty.pem, impossibly tiny 0 bytes
	I0520 04:32:04.418980   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 04:32:04.419001   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem (1078 bytes)
	I0520 04:32:04.419021   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem (1123 bytes)
	I0520 04:32:04.419040   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/key.pem (1679 bytes)
	I0520 04:32:04.419096   16966 certs.go:484] found cert: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem (1708 bytes)
	I0520 04:32:04.419462   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 04:32:04.426448   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 04:32:04.433845   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 04:32:04.440861   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 04:32:04.447375   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 04:32:04.454049   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 04:32:04.462634   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 04:32:04.469972   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 04:32:04.476673   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/ssl/certs/148952.pem --> /usr/share/ca-certificates/148952.pem (1708 bytes)
	I0520 04:32:04.482976   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 04:32:04.490017   16966 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/14895.pem --> /usr/share/ca-certificates/14895.pem (1338 bytes)
	I0520 04:32:04.496555   16966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 04:32:04.501192   16966 ssh_runner.go:195] Run: openssl version
	I0520 04:32:04.502999   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148952.pem && ln -fs /usr/share/ca-certificates/148952.pem /etc/ssl/certs/148952.pem"
	I0520 04:32:04.506198   16966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148952.pem
	I0520 04:32:04.507693   16966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 11:16 /usr/share/ca-certificates/148952.pem
	I0520 04:32:04.507713   16966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148952.pem
	I0520 04:32:04.509701   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148952.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 04:32:04.512467   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 04:32:04.515266   16966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:32:04.516620   16966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:32:04.516641   16966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 04:32:04.518396   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 04:32:04.521576   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14895.pem && ln -fs /usr/share/ca-certificates/14895.pem /etc/ssl/certs/14895.pem"
	I0520 04:32:04.524439   16966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14895.pem
	I0520 04:32:04.525715   16966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 11:16 /usr/share/ca-certificates/14895.pem
	I0520 04:32:04.525735   16966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14895.pem
	I0520 04:32:04.527400   16966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14895.pem /etc/ssl/certs/51391683.0"
	I0520 04:32:04.530599   16966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 04:32:04.532502   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 04:32:04.534598   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 04:32:04.536504   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 04:32:04.538404   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 04:32:04.540124   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 04:32:04.541716   16966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 04:32:04.543476   16966 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-484000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53197 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-484000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 04:32:04.543547   16966 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:32:04.553184   16966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 04:32:04.556204   16966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 04:32:04.556211   16966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 04:32:04.556214   16966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 04:32:04.556234   16966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 04:32:04.558970   16966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 04:32:04.559259   16966 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-484000" does not appear in /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:32:04.559353   16966 kubeconfig.go:62] /Users/jenkins/minikube-integration/18932-14402/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-484000" cluster setting kubeconfig missing "stopped-upgrade-484000" context setting]
	I0520 04:32:04.559572   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:32:04.560030   16966 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:32:04.560508   16966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 04:32:04.563098   16966 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-484000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0520 04:32:04.563104   16966 kubeadm.go:1154] stopping kube-system containers ...
	I0520 04:32:04.563139   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 04:32:04.578076   16966 docker.go:483] Stopping containers: [d3ea00f44c5d 003767b25f73 d34dd3433fb6 b050fc43c844 4e13cbe1f144 65fe0618401e 5fa2cd2b9667 c4002ed29331]
	I0520 04:32:04.578137   16966 ssh_runner.go:195] Run: docker stop d3ea00f44c5d 003767b25f73 d34dd3433fb6 b050fc43c844 4e13cbe1f144 65fe0618401e 5fa2cd2b9667 c4002ed29331
	I0520 04:32:04.588956   16966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 04:32:04.594445   16966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:32:04.597503   16966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:32:04.597512   16966 kubeadm.go:156] found existing configuration files:
	
	I0520 04:32:04.597533   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf
	I0520 04:32:04.599888   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:32:04.599911   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:32:04.602622   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf
	I0520 04:32:04.605576   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:32:04.605595   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:32:04.607898   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf
	I0520 04:32:04.610535   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:32:04.610555   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:32:04.613459   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf
	I0520 04:32:04.615691   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:32:04.615711   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:32:04.618494   16966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:32:04.621612   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:04.643045   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.196806   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.336931   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.359508   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 04:32:05.382457   16966 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:32:05.382542   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:05.884715   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:06.384600   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:32:06.388960   16966 api_server.go:72] duration metric: took 1.006517125s to wait for apiserver process to appear ...
	I0520 04:32:06.388968   16966 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:32:06.388977   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:11.391091   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:11.391236   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:16.391766   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:16.391854   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:21.392473   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:21.392521   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:26.393239   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:26.393287   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:31.394236   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:31.394307   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:36.395520   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:36.395565   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:41.397163   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:41.397217   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:46.399196   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:46.399216   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:51.401392   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:51.401437   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:32:56.403582   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:32:56.403609   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:01.405749   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:01.405780   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:06.407896   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:06.408126   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:06.420035   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:06.420112   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:06.430666   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:06.430739   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:06.441240   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:06.441305   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:06.454035   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:06.454099   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:06.464572   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:06.464635   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:06.475504   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:06.475576   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:06.485436   16966 logs.go:276] 0 containers: []
	W0520 04:33:06.485447   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:06.485513   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:06.501030   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:06.501059   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:06.501065   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:06.515600   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:06.515610   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:06.526875   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:06.526885   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:06.538000   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:06.538337   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:06.554252   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:06.554268   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:06.565785   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:06.565800   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:06.569862   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:06.569869   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:06.583545   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:06.583559   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:06.598252   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:06.598266   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:06.610033   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:06.610045   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:06.728647   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:06.728661   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:06.744735   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:06.744748   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:06.783424   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:06.783433   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:06.830620   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:06.830635   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:06.842203   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:06.842214   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:06.859701   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:06.859711   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:06.873978   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:06.873988   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:09.400609   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:14.402832   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:14.403023   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:14.418814   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:14.418916   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:14.431418   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:14.431490   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:14.442167   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:14.442259   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:14.452419   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:14.452502   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:14.462777   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:14.462853   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:14.473752   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:14.473832   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:14.486892   16966 logs.go:276] 0 containers: []
	W0520 04:33:14.486902   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:14.486955   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:14.497950   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:14.497968   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:14.497973   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:14.511933   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:14.511944   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:14.536255   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:14.536266   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:14.540233   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:14.540238   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:14.551803   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:14.551814   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:14.562742   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:14.562752   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:14.600412   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:14.600421   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:14.636640   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:14.636650   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:14.678033   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:14.678044   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:14.693586   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:14.693598   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:14.705287   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:14.705298   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:14.722883   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:14.722893   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:14.740951   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:14.740961   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:14.754677   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:14.754687   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:14.768326   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:14.768335   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:14.791164   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:14.791173   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:14.803274   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:14.803286   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:17.316634   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:22.317513   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:22.317811   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:22.339981   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:22.340082   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:22.356472   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:22.356561   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:22.369225   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:22.369296   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:22.380516   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:22.380599   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:22.390741   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:22.390815   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:22.401377   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:22.401445   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:22.411438   16966 logs.go:276] 0 containers: []
	W0520 04:33:22.411455   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:22.411511   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:22.422525   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:22.422543   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:22.422548   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:22.459243   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:22.459254   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:22.472717   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:22.472727   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:22.509041   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:22.509049   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:22.520064   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:22.520074   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:22.537361   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:22.537372   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:22.553295   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:22.553305   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:22.564771   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:22.564782   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:22.576192   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:22.576202   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:22.602327   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:22.602340   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:22.619186   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:22.619196   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:22.656708   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:22.656720   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:22.675596   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:22.675609   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:22.687145   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:22.687156   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:22.699388   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:22.699401   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:22.703568   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:22.703578   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:22.717930   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:22.717944   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:25.235494   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:30.237742   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:30.238084   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:30.279685   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:30.279788   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:30.297526   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:30.297602   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:30.310823   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:30.310893   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:30.322270   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:30.322338   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:30.333622   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:30.333685   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:30.344739   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:30.344809   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:30.355012   16966 logs.go:276] 0 containers: []
	W0520 04:33:30.355023   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:30.355076   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:30.365872   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:30.365891   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:30.365896   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:30.402236   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:30.402243   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:30.439977   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:30.439995   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:30.451918   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:30.451928   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:30.463621   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:30.463635   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:30.467767   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:30.467775   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:30.479375   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:30.479387   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:30.493412   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:30.493424   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:30.518375   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:30.518384   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:30.530400   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:30.530411   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:30.567888   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:30.567898   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:30.582280   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:30.582291   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:30.596700   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:30.596710   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:30.611619   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:30.611630   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:30.631794   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:30.631804   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:30.642542   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:30.642553   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:30.657105   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:30.657114   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:33.181976   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:38.184247   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:38.184480   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:38.208810   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:38.208924   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:38.224669   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:38.224746   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:38.237504   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:38.237578   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:38.249058   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:38.249130   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:38.259252   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:38.259315   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:38.270448   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:38.270519   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:38.281330   16966 logs.go:276] 0 containers: []
	W0520 04:33:38.281344   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:38.281405   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:38.294429   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:38.294448   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:38.294454   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:38.332999   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:38.333019   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:38.347572   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:38.347581   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:38.385851   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:38.385862   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:38.398019   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:38.398032   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:38.415340   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:38.415353   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:38.429152   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:38.429164   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:38.441226   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:38.441237   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:38.477579   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:38.477593   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:38.491367   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:38.491382   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:38.502397   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:38.502409   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:38.513290   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:38.513299   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:38.524102   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:38.524113   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:38.528306   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:38.528317   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:38.542735   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:38.542744   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:38.566401   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:38.566410   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:38.583923   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:38.583936   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:41.096389   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:46.098847   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:46.099069   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:46.116553   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:46.116642   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:46.130102   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:46.130177   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:46.141947   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:46.142016   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:46.154035   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:46.154105   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:46.164590   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:46.164651   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:46.175039   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:46.175104   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:46.185246   16966 logs.go:276] 0 containers: []
	W0520 04:33:46.185257   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:46.185307   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:46.195712   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:46.195746   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:46.195752   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:46.207155   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:46.207165   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:46.222411   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:46.222422   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:46.233652   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:46.233662   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:46.270184   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:46.270197   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:46.284217   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:46.284226   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:46.305503   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:46.305513   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:46.317541   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:46.317552   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:46.329669   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:46.329680   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:46.368356   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:46.368373   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:46.383965   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:46.383974   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:46.401714   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:46.401728   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:46.413559   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:46.413571   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:46.438539   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:46.438547   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:46.449801   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:46.449811   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:46.454065   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:46.454072   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:46.487815   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:46.487826   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:49.004509   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:33:54.006724   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:33:54.006891   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:33:54.026830   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:33:54.026925   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:33:54.042845   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:33:54.042914   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:33:54.055712   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:33:54.055793   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:33:54.069852   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:33:54.069918   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:33:54.080339   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:33:54.080413   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:33:54.091189   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:33:54.091249   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:33:54.100948   16966 logs.go:276] 0 containers: []
	W0520 04:33:54.100959   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:33:54.101009   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:33:54.111509   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:33:54.111530   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:33:54.111536   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:33:54.124229   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:33:54.124238   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:33:54.160562   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:33:54.160572   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:33:54.195935   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:33:54.195946   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:33:54.210063   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:33:54.210074   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:33:54.223773   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:33:54.223783   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:33:54.234697   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:33:54.234708   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:33:54.246863   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:33:54.246874   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:33:54.258393   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:33:54.258404   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:33:54.262840   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:33:54.262850   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:33:54.302910   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:33:54.302921   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:33:54.317614   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:33:54.317625   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:33:54.333512   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:33:54.333525   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:33:54.358366   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:33:54.358373   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:33:54.370115   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:33:54.370127   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:33:54.387518   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:33:54.387531   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:33:54.401204   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:33:54.401215   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:33:56.916473   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:01.918559   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:01.918944   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:01.954376   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:01.954515   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:01.976262   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:01.976363   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:01.990806   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:01.990889   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:02.003532   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:02.003602   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:02.016345   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:02.016420   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:02.033077   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:02.033148   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:02.043070   16966 logs.go:276] 0 containers: []
	W0520 04:34:02.043081   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:02.043131   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:02.054664   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:02.054683   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:02.054688   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:02.092160   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:02.092167   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:02.106109   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:02.106120   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:02.120257   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:02.120269   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:02.132766   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:02.132777   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:02.168768   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:02.168781   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:02.180746   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:02.180756   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:02.194433   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:02.194443   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:02.199178   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:02.199192   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:02.211344   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:02.211355   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:02.222949   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:02.222958   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:02.247885   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:02.247894   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:02.262160   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:02.262173   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:02.300145   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:02.300156   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:02.316269   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:02.316280   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:02.328363   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:02.328375   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:02.348667   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:02.348677   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:04.862378   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:09.864723   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:09.864897   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:09.877810   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:09.877884   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:09.888334   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:09.888398   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:09.899203   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:09.899271   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:09.915379   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:09.915452   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:09.926039   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:09.926112   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:09.936959   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:09.937024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:09.950110   16966 logs.go:276] 0 containers: []
	W0520 04:34:09.950121   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:09.950189   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:09.960661   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:09.960678   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:09.960683   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:09.975677   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:09.975688   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:09.990011   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:09.990020   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:10.007280   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:10.007293   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:10.042291   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:10.042301   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:10.085119   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:10.085130   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:10.096421   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:10.096434   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:10.108655   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:10.108665   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:10.133147   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:10.133157   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:10.137212   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:10.137217   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:10.151409   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:10.151418   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:10.169114   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:10.169125   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:10.180466   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:10.180478   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:10.192077   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:10.192088   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:10.229179   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:10.229191   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:10.243173   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:10.243188   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:10.256750   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:10.256762   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:12.774413   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:17.777151   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:17.777527   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:17.815716   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:17.815854   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:17.836034   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:17.836149   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:17.850826   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:17.850902   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:17.862997   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:17.863064   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:17.873572   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:17.873642   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:17.884196   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:17.884262   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:17.899164   16966 logs.go:276] 0 containers: []
	W0520 04:34:17.899175   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:17.899229   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:17.910068   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:17.910085   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:17.910090   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:17.922623   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:17.922634   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:17.939917   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:17.939927   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:17.976538   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:17.976545   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:17.980895   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:17.980904   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:17.994975   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:17.994987   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:18.014299   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:18.014310   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:18.034815   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:18.034825   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:18.049966   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:18.049976   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:18.062663   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:18.062672   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:18.082183   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:18.082194   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:18.094397   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:18.094408   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:18.136531   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:18.136541   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:18.173711   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:18.173721   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:18.197534   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:18.197541   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:18.211822   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:18.211831   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:18.226695   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:18.226705   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:20.745572   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:25.747759   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:25.747994   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:25.770460   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:25.770544   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:25.785185   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:25.785251   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:25.803106   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:25.803178   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:25.813803   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:25.813865   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:25.824167   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:25.824223   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:25.834348   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:25.834406   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:25.844233   16966 logs.go:276] 0 containers: []
	W0520 04:34:25.844246   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:25.844301   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:25.854912   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:25.854932   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:25.854936   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:25.891938   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:25.891951   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:25.903854   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:25.903863   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:25.916199   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:25.916210   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:25.927814   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:25.927826   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:25.952770   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:25.952778   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:25.990755   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:25.990767   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:26.005174   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:26.005184   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:26.016971   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:26.016982   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:26.035210   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:26.035219   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:26.048744   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:26.048757   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:26.053329   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:26.053335   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:26.067781   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:26.067791   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:26.082847   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:26.082856   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:26.096947   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:26.096962   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:26.108718   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:26.108729   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:26.120577   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:26.120587   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:28.658559   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:33.660687   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:33.660830   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:33.683445   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:33.683513   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:33.694464   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:33.694534   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:33.705006   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:33.705078   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:33.715911   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:33.715980   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:33.730371   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:33.730434   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:33.740958   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:33.741023   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:33.751104   16966 logs.go:276] 0 containers: []
	W0520 04:34:33.751114   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:33.751169   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:33.761450   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:33.761466   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:33.761471   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:33.775996   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:33.776005   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:33.789809   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:33.789821   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:33.801799   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:33.801810   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:33.840028   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:33.840036   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:33.852181   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:33.852191   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:33.869215   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:33.869225   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:33.880723   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:33.880733   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:33.894800   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:33.894810   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:33.906233   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:33.906245   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:33.921139   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:33.921149   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:33.946987   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:33.946994   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:33.950889   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:33.950894   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:33.989025   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:33.989036   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:34.027449   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:34.027460   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:34.041101   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:34.041111   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:34.052554   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:34.052566   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:36.569423   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:41.570607   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:41.570822   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:41.588649   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:41.588738   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:41.603028   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:41.603104   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:41.613908   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:41.613971   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:41.624176   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:41.624239   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:41.638361   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:41.638431   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:41.649391   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:41.649451   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:41.663745   16966 logs.go:276] 0 containers: []
	W0520 04:34:41.663757   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:41.663817   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:41.674471   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:41.674490   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:41.674495   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:41.699141   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:41.699152   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:41.736302   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:41.736309   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:41.740805   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:41.740811   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:41.754705   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:41.754715   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:41.772196   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:41.772206   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:41.785985   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:41.786000   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:41.797533   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:41.797547   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:41.837908   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:41.837918   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:41.849488   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:41.849501   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:41.868044   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:41.868054   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:41.884331   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:41.884341   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:41.899684   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:41.899700   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:41.914657   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:41.914670   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:41.926738   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:41.926749   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:41.961259   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:41.961268   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:41.975490   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:41.975501   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:44.490307   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:49.492453   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:49.492667   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:49.510625   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:49.510707   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:49.527390   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:49.527460   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:49.537798   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:49.537869   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:49.548082   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:49.548156   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:49.566275   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:49.566343   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:49.576639   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:49.576708   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:49.590685   16966 logs.go:276] 0 containers: []
	W0520 04:34:49.590697   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:49.590748   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:49.602352   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:49.602376   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:49.602382   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:49.607207   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:49.607219   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:49.645335   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:49.645347   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:49.659669   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:49.659684   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:49.696627   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:49.696638   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:49.714831   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:49.714844   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:49.726985   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:49.726998   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:49.765174   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:49.765182   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:49.776383   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:49.776394   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:49.787925   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:49.787936   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:49.805268   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:49.805281   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:49.819115   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:49.819129   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:49.833752   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:49.833761   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:49.848023   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:49.848032   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:49.859113   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:49.859123   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:49.874805   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:49.874817   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:49.886145   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:49.886163   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:34:52.410852   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:34:57.412984   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:34:57.413077   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:34:57.424550   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:34:57.424632   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:34:57.435495   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:34:57.435568   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:34:57.449980   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:34:57.450049   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:34:57.460855   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:34:57.460920   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:34:57.471899   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:34:57.471975   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:34:57.482404   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:34:57.482471   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:34:57.495445   16966 logs.go:276] 0 containers: []
	W0520 04:34:57.495457   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:34:57.495515   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:34:57.505503   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:34:57.505519   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:34:57.505524   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:34:57.541969   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:34:57.541977   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:34:57.552477   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:34:57.552487   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:34:57.564041   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:34:57.564055   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:34:57.576089   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:34:57.576104   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:34:57.588013   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:34:57.588024   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:34:57.623299   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:34:57.623310   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:34:57.637820   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:34:57.637830   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:34:57.653987   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:34:57.653998   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:34:57.666109   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:34:57.666120   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:34:57.680497   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:34:57.680510   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:34:57.698761   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:34:57.698772   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:34:57.703116   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:34:57.703123   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:34:57.745678   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:34:57.745689   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:34:57.763426   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:34:57.763437   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:34:57.778231   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:34:57.778241   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:34:57.795605   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:34:57.795615   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:00.320807   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:05.322434   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:05.322579   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:05.333392   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:05.333472   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:05.344279   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:05.344345   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:05.356962   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:05.357035   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:05.370411   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:05.370480   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:05.384764   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:05.384837   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:05.395656   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:05.395720   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:05.407724   16966 logs.go:276] 0 containers: []
	W0520 04:35:05.407737   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:05.407798   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:05.421929   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:05.421947   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:05.421953   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:05.426286   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:05.426292   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:05.463682   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:05.463693   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:05.478203   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:05.478213   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:05.494664   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:05.494673   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:05.509751   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:05.509761   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:05.524458   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:05.524468   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:05.539577   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:05.539586   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:05.550946   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:05.550955   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:05.573502   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:05.573509   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:05.610807   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:05.610815   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:05.645478   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:05.645488   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:05.657106   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:05.657116   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:05.670729   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:05.670739   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:05.683356   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:05.683366   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:05.698949   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:05.698960   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:05.710669   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:05.710680   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:08.231507   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:13.233794   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:13.234013   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:13.257303   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:13.257396   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:13.273110   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:13.273192   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:13.294439   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:13.294509   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:13.305041   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:13.305113   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:13.315667   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:13.315741   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:13.327104   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:13.327177   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:13.337384   16966 logs.go:276] 0 containers: []
	W0520 04:35:13.337395   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:13.337451   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:13.348652   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:13.348671   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:13.348676   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:13.363944   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:13.363954   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:13.377580   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:13.377589   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:13.390455   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:13.390466   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:13.413095   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:13.413106   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:13.424547   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:13.424559   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:13.437161   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:13.437173   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:13.448674   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:13.448684   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:13.484678   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:13.484688   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:13.499290   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:13.499300   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:13.511409   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:13.511420   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:13.527557   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:13.527567   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:13.545587   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:13.545596   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:13.557152   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:13.557162   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:13.594142   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:13.594150   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:13.598155   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:13.598164   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:13.634740   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:13.634749   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:16.150095   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:21.152354   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:21.152584   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:21.167481   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:21.167565   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:21.179330   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:21.179391   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:21.190962   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:21.191030   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:21.201696   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:21.201764   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:21.212384   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:21.212450   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:21.222456   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:21.222519   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:21.232045   16966 logs.go:276] 0 containers: []
	W0520 04:35:21.232058   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:21.232115   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:21.242641   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:21.242670   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:21.242675   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:21.256876   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:21.256887   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:21.268463   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:21.268477   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:21.280242   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:21.280255   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:21.284561   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:21.284570   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:21.301049   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:21.301064   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:21.318728   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:21.318738   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:21.341706   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:21.341714   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:21.361013   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:21.361022   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:21.372710   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:21.372719   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:21.407727   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:21.407737   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:21.422034   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:21.422044   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:21.433448   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:21.433463   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:21.445268   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:21.445278   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:21.481873   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:21.481890   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:21.520427   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:21.520438   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:21.534829   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:21.534838   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:24.053074   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:29.055331   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:29.055496   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:29.071352   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:29.071442   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:29.084369   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:29.084445   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:29.095371   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:29.095441   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:29.106224   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:29.106304   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:29.117187   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:29.117256   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:29.127474   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:29.127551   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:29.140246   16966 logs.go:276] 0 containers: []
	W0520 04:35:29.140259   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:29.140320   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:29.151048   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:29.151069   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:29.151075   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:29.162911   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:29.162924   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:29.167649   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:29.167658   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:29.182147   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:29.182161   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:29.200635   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:29.200648   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:29.218189   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:29.218203   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:29.230499   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:29.230513   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:29.241896   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:29.241906   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:29.265507   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:29.265514   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:29.278887   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:29.278897   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:29.292021   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:29.292031   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:29.331109   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:29.331126   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:29.368221   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:29.368236   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:29.405752   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:29.405763   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:29.417911   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:29.417923   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:29.436967   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:29.436980   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:29.451663   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:29.451672   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:31.965716   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:36.967680   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:36.967850   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:36.980767   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:36.980847   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:36.992103   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:36.992216   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:37.002378   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:37.002447   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:37.012620   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:37.012686   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:37.022767   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:37.022834   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:37.033375   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:37.033442   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:37.043817   16966 logs.go:276] 0 containers: []
	W0520 04:35:37.043829   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:37.043888   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:37.053836   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:37.053855   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:37.053860   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:37.076183   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:37.076190   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:37.089797   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:37.089807   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:37.104705   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:37.104715   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:37.116642   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:37.116654   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:37.137043   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:37.137055   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:37.152501   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:37.152514   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:37.163425   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:37.163439   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:37.197838   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:37.197851   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:37.202591   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:37.202600   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:37.214698   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:37.214713   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:37.232478   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:37.232487   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:37.244364   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:37.244375   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:37.280578   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:37.280586   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:37.298912   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:37.298926   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:37.310237   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:37.310247   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:37.324781   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:37.324791   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:39.865471   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:44.867819   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:44.868153   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:44.905883   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:44.906024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:44.926490   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:44.926586   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:44.943510   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:44.943592   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:44.955950   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:44.956024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:44.966877   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:44.966945   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:44.978783   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:44.982239   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:44.993033   16966 logs.go:276] 0 containers: []
	W0520 04:35:44.993044   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:44.993095   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:45.003809   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:45.003827   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:45.003832   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:45.042545   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:45.042554   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:45.055139   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:45.055151   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:45.072558   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:45.072568   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:45.108415   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:45.108427   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:45.123572   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:45.123582   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:45.162506   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:45.162517   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:45.177433   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:45.177442   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:45.192315   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:45.192324   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:45.206181   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:45.206191   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:45.218197   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:45.218208   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:45.229763   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:45.229773   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:45.234285   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:45.234291   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:45.246050   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:45.246063   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:45.264490   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:45.264502   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:45.287599   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:45.287611   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:45.299277   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:45.299287   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:47.812870   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:35:52.815158   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:35:52.815312   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:35:52.825850   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:35:52.825921   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:35:52.836237   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:35:52.836305   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:35:52.846413   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:35:52.846471   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:35:52.857148   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:35:52.857217   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:35:52.875609   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:35:52.875674   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:35:52.885892   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:35:52.885955   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:35:52.896283   16966 logs.go:276] 0 containers: []
	W0520 04:35:52.896296   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:35:52.896346   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:35:52.910045   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:35:52.910064   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:35:52.910069   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:35:52.921924   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:35:52.921934   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:35:52.933653   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:35:52.933664   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:35:52.946032   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:35:52.946043   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:35:52.950732   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:35:52.950741   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:35:52.968611   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:35:52.968621   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:35:52.979911   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:35:52.979921   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:35:52.991109   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:35:52.991119   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:35:53.024168   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:35:53.024181   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:35:53.038033   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:35:53.038042   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:35:53.053506   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:35:53.053518   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:35:53.091334   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:35:53.091344   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:35:53.132623   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:35:53.132642   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:35:53.148253   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:35:53.148268   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:35:53.162218   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:35:53.162227   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:35:53.184537   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:35:53.184545   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:35:53.197932   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:35:53.197942   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:35:55.714122   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:00.714437   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:00.714594   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:36:00.727584   16966 logs.go:276] 2 containers: [1142a17e4c81 4e13cbe1f144]
	I0520 04:36:00.727665   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:36:00.742945   16966 logs.go:276] 2 containers: [0b620e142eb2 d34dd3433fb6]
	I0520 04:36:00.743018   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:36:00.753727   16966 logs.go:276] 1 containers: [ec6c6ab96367]
	I0520 04:36:00.753793   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:36:00.764119   16966 logs.go:276] 2 containers: [cfb58db8ddce 003767b25f73]
	I0520 04:36:00.764185   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:36:00.778194   16966 logs.go:276] 1 containers: [ad77d8a20b39]
	I0520 04:36:00.778258   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:36:00.793472   16966 logs.go:276] 2 containers: [948f68cda1d0 d3ea00f44c5d]
	I0520 04:36:00.793540   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:36:00.803977   16966 logs.go:276] 0 containers: []
	W0520 04:36:00.803989   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:36:00.804045   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:36:00.814759   16966 logs.go:276] 2 containers: [9f2980a4fc41 1fc7b65370e3]
	I0520 04:36:00.814777   16966 logs.go:123] Gathering logs for kube-scheduler [cfb58db8ddce] ...
	I0520 04:36:00.814782   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfb58db8ddce"
	I0520 04:36:00.826510   16966 logs.go:123] Gathering logs for kube-scheduler [003767b25f73] ...
	I0520 04:36:00.826521   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 003767b25f73"
	I0520 04:36:00.841888   16966 logs.go:123] Gathering logs for kube-controller-manager [948f68cda1d0] ...
	I0520 04:36:00.841901   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 948f68cda1d0"
	I0520 04:36:00.861953   16966 logs.go:123] Gathering logs for storage-provisioner [9f2980a4fc41] ...
	I0520 04:36:00.861966   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f2980a4fc41"
	I0520 04:36:00.873683   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:36:00.873693   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:36:00.896215   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:36:00.896221   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:36:00.900583   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:36:00.900591   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:36:00.936969   16966 logs.go:123] Gathering logs for etcd [0b620e142eb2] ...
	I0520 04:36:00.936983   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b620e142eb2"
	I0520 04:36:00.952814   16966 logs.go:123] Gathering logs for coredns [ec6c6ab96367] ...
	I0520 04:36:00.952824   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec6c6ab96367"
	I0520 04:36:00.964946   16966 logs.go:123] Gathering logs for storage-provisioner [1fc7b65370e3] ...
	I0520 04:36:00.964956   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fc7b65370e3"
	I0520 04:36:00.985240   16966 logs.go:123] Gathering logs for kube-apiserver [1142a17e4c81] ...
	I0520 04:36:00.985250   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1142a17e4c81"
	I0520 04:36:01.006666   16966 logs.go:123] Gathering logs for kube-apiserver [4e13cbe1f144] ...
	I0520 04:36:01.006675   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e13cbe1f144"
	I0520 04:36:01.046409   16966 logs.go:123] Gathering logs for etcd [d34dd3433fb6] ...
	I0520 04:36:01.046419   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d34dd3433fb6"
	I0520 04:36:01.063297   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:36:01.063309   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:36:01.100786   16966 logs.go:123] Gathering logs for kube-proxy [ad77d8a20b39] ...
	I0520 04:36:01.100795   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad77d8a20b39"
	I0520 04:36:01.113311   16966 logs.go:123] Gathering logs for kube-controller-manager [d3ea00f44c5d] ...
	I0520 04:36:01.113321   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3ea00f44c5d"
	I0520 04:36:01.130878   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:36:01.130889   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:36:03.645027   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:08.647213   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:08.647317   16966 kubeadm.go:591] duration metric: took 4m4.094137667s to restartPrimaryControlPlane
	W0520 04:36:08.647415   16966 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 04:36:08.647453   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 04:36:09.732011   16966 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.084688417s)
	I0520 04:36:09.732078   16966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 04:36:09.737256   16966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 04:36:09.740196   16966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 04:36:09.743153   16966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 04:36:09.743160   16966 kubeadm.go:156] found existing configuration files:
	
	I0520 04:36:09.743183   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf
	I0520 04:36:09.745683   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 04:36:09.745705   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 04:36:09.748440   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf
	I0520 04:36:09.751588   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 04:36:09.751611   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 04:36:09.754263   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf
	I0520 04:36:09.756664   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 04:36:09.756687   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 04:36:09.759815   16966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf
	I0520 04:36:09.762667   16966 kubeadm.go:162] "https://control-plane.minikube.internal:53197" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53197 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 04:36:09.762687   16966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 04:36:09.765404   16966 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 04:36:09.782442   16966 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 04:36:09.782476   16966 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 04:36:09.833059   16966 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 04:36:09.833123   16966 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 04:36:09.833187   16966 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 04:36:09.889956   16966 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 04:36:09.899172   16966 out.go:204]   - Generating certificates and keys ...
	I0520 04:36:09.899206   16966 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 04:36:09.899238   16966 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 04:36:09.899277   16966 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 04:36:09.899310   16966 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 04:36:09.899351   16966 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 04:36:09.899385   16966 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 04:36:09.899419   16966 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 04:36:09.899454   16966 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 04:36:09.899487   16966 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 04:36:09.899527   16966 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 04:36:09.899544   16966 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 04:36:09.899577   16966 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 04:36:09.959701   16966 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 04:36:10.090773   16966 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 04:36:10.155744   16966 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 04:36:10.288105   16966 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 04:36:10.319031   16966 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 04:36:10.319575   16966 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 04:36:10.319703   16966 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 04:36:10.396449   16966 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 04:36:10.400188   16966 out.go:204]   - Booting up control plane ...
	I0520 04:36:10.400231   16966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 04:36:10.400266   16966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 04:36:10.400302   16966 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 04:36:10.401285   16966 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 04:36:10.401368   16966 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 04:36:14.905322   16966 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.504088 seconds
	I0520 04:36:14.905425   16966 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 04:36:14.911673   16966 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 04:36:15.419966   16966 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 04:36:15.420101   16966 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-484000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 04:36:15.930638   16966 kubeadm.go:309] [bootstrap-token] Using token: ew4xpo.mbfk0gq3vr62cx5o
	I0520 04:36:15.937237   16966 out.go:204]   - Configuring RBAC rules ...
	I0520 04:36:15.937344   16966 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 04:36:15.937433   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 04:36:15.940412   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 04:36:15.944044   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 04:36:15.945673   16966 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 04:36:15.947226   16966 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 04:36:15.955603   16966 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 04:36:16.152673   16966 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 04:36:16.335836   16966 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 04:36:16.336217   16966 kubeadm.go:309] 
	I0520 04:36:16.336247   16966 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 04:36:16.336252   16966 kubeadm.go:309] 
	I0520 04:36:16.336312   16966 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 04:36:16.336317   16966 kubeadm.go:309] 
	I0520 04:36:16.336328   16966 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 04:36:16.336354   16966 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 04:36:16.336471   16966 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 04:36:16.336475   16966 kubeadm.go:309] 
	I0520 04:36:16.336501   16966 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 04:36:16.336507   16966 kubeadm.go:309] 
	I0520 04:36:16.336538   16966 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 04:36:16.336543   16966 kubeadm.go:309] 
	I0520 04:36:16.336603   16966 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 04:36:16.336646   16966 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 04:36:16.336687   16966 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 04:36:16.336692   16966 kubeadm.go:309] 
	I0520 04:36:16.336750   16966 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 04:36:16.336789   16966 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 04:36:16.336800   16966 kubeadm.go:309] 
	I0520 04:36:16.336841   16966 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ew4xpo.mbfk0gq3vr62cx5o \
	I0520 04:36:16.336892   16966 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 \
	I0520 04:36:16.336908   16966 kubeadm.go:309] 	--control-plane 
	I0520 04:36:16.336911   16966 kubeadm.go:309] 
	I0520 04:36:16.336970   16966 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 04:36:16.336979   16966 kubeadm.go:309] 
	I0520 04:36:16.337017   16966 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ew4xpo.mbfk0gq3vr62cx5o \
	I0520 04:36:16.337073   16966 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:ca9ec03f82f66153a35a2ecc2d03f5f208d679a7d86a5a796efdea90c63b3696 
	I0520 04:36:16.337137   16966 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 04:36:16.337189   16966 cni.go:84] Creating CNI manager for ""
	I0520 04:36:16.337198   16966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:36:16.340654   16966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 04:36:16.343683   16966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 04:36:16.346923   16966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 04:36:16.352026   16966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 04:36:16.352091   16966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 04:36:16.352105   16966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-484000 minikube.k8s.io/updated_at=2024_05_20T04_36_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0881f46f23ff5e176004d1ef4d2b1cc347d248bb minikube.k8s.io/name=stopped-upgrade-484000 minikube.k8s.io/primary=true
	I0520 04:36:16.355388   16966 ops.go:34] apiserver oom_adj: -16
	I0520 04:36:16.393831   16966 kubeadm.go:1107] duration metric: took 41.776667ms to wait for elevateKubeSystemPrivileges
	W0520 04:36:16.393855   16966 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 04:36:16.393858   16966 kubeadm.go:393] duration metric: took 4m11.854282959s to StartCluster
	I0520 04:36:16.393867   16966 settings.go:142] acquiring lock: {Name:mkfc25767ac77ec7e329af7eb025d278b3830db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:36:16.393953   16966 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:36:16.394369   16966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/kubeconfig: {Name:mk5af4624218472b4409997d6f105a56e728f278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:36:16.394577   16966 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:36:16.398627   16966 out.go:177] * Verifying Kubernetes components...
	I0520 04:36:16.394585   16966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 04:36:16.394661   16966 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:36:16.406496   16966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 04:36:16.406505   16966 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-484000"
	I0520 04:36:16.406517   16966 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-484000"
	W0520 04:36:16.406521   16966 addons.go:243] addon storage-provisioner should already be in state true
	I0520 04:36:16.406529   16966 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-484000"
	I0520 04:36:16.406533   16966 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0520 04:36:16.406539   16966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-484000"
	I0520 04:36:16.407040   16966 retry.go:31] will retry after 1.205385154s: connect: dial unix /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/monitor: connect: connection refused
	I0520 04:36:16.411575   16966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 04:36:16.415704   16966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:36:16.415711   16966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 04:36:16.415718   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:36:16.500799   16966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 04:36:16.506051   16966 api_server.go:52] waiting for apiserver process to appear ...
	I0520 04:36:16.506092   16966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 04:36:16.509786   16966 api_server.go:72] duration metric: took 115.209ms to wait for apiserver process to appear ...
	I0520 04:36:16.509794   16966 api_server.go:88] waiting for apiserver healthz status ...
	I0520 04:36:16.509800   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:16.548935   16966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 04:36:17.615500   16966 kapi.go:59] client config for stopped-upgrade-484000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key", CAFile:"/Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 04:36:17.615645   16966 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-484000"
	W0520 04:36:17.615654   16966 addons.go:243] addon default-storageclass should already be in state true
	I0520 04:36:17.615668   16966 host.go:66] Checking if "stopped-upgrade-484000" exists ...
	I0520 04:36:17.616500   16966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 04:36:17.616507   16966 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 04:36:17.616513   16966 sshutil.go:53] new ssh client: &{IP:localhost Port:53162 SSHKeyPath:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/stopped-upgrade-484000/id_rsa Username:docker}
	I0520 04:36:17.649971   16966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 04:36:21.510814   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:21.510885   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:26.511184   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:26.511206   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:31.511167   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:31.511216   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:36.511565   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:36.511618   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:41.512247   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:41.512288   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:46.512842   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:46.512884   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 04:36:47.758364   16966 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 04:36:47.761517   16966 out.go:177] * Enabled addons: storage-provisioner
	I0520 04:36:47.772351   16966 addons.go:505] duration metric: took 31.379177291s for enable addons: enabled=[storage-provisioner]
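
	[editor note] The addon flow above applies /etc/kubernetes/addons/storage-provisioner.yaml inside the guest, while the default-storageclass callback fails because the apiserver at 10.0.2.15:8443 never answers. The healthz probes that follow could in principle be reproduced by hand against the same endpoint using the certificate paths shown in the client config above; a minimal sketch, assuming those paths and the guest address from this run:

	    # probe the apiserver healthz endpoint with the profile's client certificates
	    curl --cacert /Users/jenkins/minikube-integration/18932-14402/.minikube/ca.crt \
	         --cert /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.crt \
	         --key /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/stopped-upgrade-484000/client.key \
	         https://10.0.2.15:8443/healthz

	Since 10.0.2.15 is the QEMU user-mode (slirp) guest address, such a probe would normally only be reachable from inside the VM, which is consistent with the repeated i/o timeouts recorded below.
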
	I0520 04:36:51.513644   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:51.513686   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:36:56.514649   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:36:56.514671   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:01.515858   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:01.515883   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:06.517668   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:06.517699   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:11.519844   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:11.519890   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:16.522239   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:16.522566   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:37:16.563964   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:37:16.564039   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:37:16.581757   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:37:16.581815   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:37:16.592284   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:37:16.592342   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:37:16.602247   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:37:16.602315   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:37:16.613441   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:37:16.613502   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:37:16.625135   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:37:16.625193   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:37:16.635295   16966 logs.go:276] 0 containers: []
	W0520 04:37:16.635312   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:37:16.635358   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:37:16.645496   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:37:16.645510   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:37:16.645514   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:37:16.660757   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:37:16.660771   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:37:16.684612   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:37:16.684620   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:37:16.695597   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:37:16.695613   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:37:16.729039   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:37:16.729048   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:37:16.763849   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:37:16.763865   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:37:16.778256   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:37:16.778269   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:37:16.789359   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:37:16.789369   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:37:16.807492   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:37:16.807506   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:37:16.819249   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:37:16.819262   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:37:16.823917   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:37:16.823924   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:37:16.838794   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:37:16.838808   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:37:16.850442   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:37:16.850456   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
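
	[editor note] Each gathering pass like the one above is minikube shelling into the guest and tailing the last 400 lines of each control-plane container. The same output could be pulled manually for any of the container IDs listed; a sketch, assuming the stopped-upgrade-484000 profile's VM is still up:

	    # tail the kube-apiserver container and the kubelet journal inside the guest
	    minikube -p stopped-upgrade-484000 ssh -- docker logs --tail 400 4ac7cffa689c
	    minikube -p stopped-upgrade-484000 ssh -- sudo journalctl -u kubelet -n 400
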
	I0520 04:37:19.363625   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:24.365834   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:24.366081   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:37:24.393431   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:37:24.393610   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:37:24.410539   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:37:24.410628   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:37:24.423815   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:37:24.423883   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:37:24.435296   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:37:24.435365   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:37:24.447451   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:37:24.447521   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:37:24.458063   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:37:24.458130   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:37:24.468265   16966 logs.go:276] 0 containers: []
	W0520 04:37:24.468277   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:37:24.468326   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:37:24.478468   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:37:24.478481   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:37:24.478487   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:37:24.490450   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:37:24.490463   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:37:24.501798   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:37:24.501811   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:37:24.516435   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:37:24.516447   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:37:24.530575   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:37:24.530585   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:37:24.547194   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:37:24.547204   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:37:24.558723   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:37:24.558733   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:37:24.569910   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:37:24.569919   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:37:24.592891   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:37:24.592899   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:37:24.625871   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:37:24.625877   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:37:24.629856   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:37:24.629864   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:37:24.664105   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:37:24.664117   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:37:24.678467   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:37:24.678479   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:37:27.194002   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:32.196817   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:32.197196   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:37:32.246838   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:37:32.246968   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:37:32.266433   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:37:32.266517   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:37:32.280237   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:37:32.280304   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:37:32.291958   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:37:32.292018   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:37:32.302193   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:37:32.302255   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:37:32.312936   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:37:32.312995   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:37:32.322852   16966 logs.go:276] 0 containers: []
	W0520 04:37:32.322862   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:37:32.322910   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:37:32.333271   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:37:32.333284   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:37:32.333289   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:37:32.344047   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:37:32.344060   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:37:32.377230   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:37:32.377237   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:37:32.381108   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:37:32.381117   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:37:32.393198   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:37:32.393209   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:37:32.411476   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:37:32.411505   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:37:32.435433   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:37:32.435441   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:37:32.447854   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:37:32.447866   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:37:32.459026   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:37:32.459039   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:37:32.492690   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:37:32.492700   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:37:32.506873   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:37:32.506886   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:37:32.520934   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:37:32.520949   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:37:32.532143   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:37:32.532153   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:37:35.048754   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:40.051988   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:40.052428   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:37:40.089081   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:37:40.089201   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:37:40.110913   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:37:40.111024   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:37:40.127397   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:37:40.127466   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:37:40.139730   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:37:40.139797   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:37:40.150441   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:37:40.150504   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:37:40.161285   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:37:40.161344   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:37:40.170785   16966 logs.go:276] 0 containers: []
	W0520 04:37:40.170800   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:37:40.170851   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:37:40.180901   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:37:40.180914   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:37:40.180918   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:37:40.192225   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:37:40.192237   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:37:40.203315   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:37:40.203329   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:37:40.209216   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:37:40.209226   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:37:40.247514   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:37:40.247528   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:37:40.261984   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:37:40.261996   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:37:40.279305   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:37:40.279321   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:37:40.291129   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:37:40.291140   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:37:40.308270   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:37:40.308279   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:37:40.319755   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:37:40.319767   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:37:40.343118   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:37:40.343124   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:37:40.376031   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:37:40.376038   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:37:40.393108   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:37:40.393120   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:37:42.905928   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:47.908387   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:47.908782   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:37:47.950262   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:37:47.950381   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:37:47.970695   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:37:47.970782   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:37:47.984582   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:37:47.984643   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:37:47.996429   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:37:47.996484   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:37:48.007208   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:37:48.007281   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:37:48.017730   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:37:48.017794   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:37:48.031868   16966 logs.go:276] 0 containers: []
	W0520 04:37:48.031884   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:37:48.031940   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:37:48.043036   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:37:48.043052   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:37:48.043057   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:37:48.061034   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:37:48.061047   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:37:48.095608   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:37:48.095615   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:37:48.099881   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:37:48.099888   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:37:48.113234   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:37:48.113244   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:37:48.124342   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:37:48.124352   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:37:48.135814   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:37:48.135823   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:37:48.150676   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:37:48.150686   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:37:48.162508   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:37:48.162521   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:37:48.177376   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:37:48.177385   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:37:48.201421   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:37:48.201431   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:37:48.236678   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:37:48.236689   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:37:48.255589   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:37:48.255598   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:37:50.769032   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:37:55.771828   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:37:55.772280   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:37:55.814403   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:37:55.814589   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:37:55.836768   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:37:55.836872   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:37:55.852258   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:37:55.852347   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:37:55.864412   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:37:55.864484   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:37:55.875202   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:37:55.875262   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:37:55.886037   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:37:55.886105   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:37:55.896723   16966 logs.go:276] 0 containers: []
	W0520 04:37:55.896734   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:37:55.896785   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:37:55.907353   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:37:55.907370   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:37:55.907375   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:37:55.919083   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:37:55.919094   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:37:55.923592   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:37:55.923598   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:37:55.968372   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:37:55.968382   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:37:55.983316   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:37:55.983326   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:37:55.997364   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:37:55.997376   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:37:56.009227   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:37:56.009240   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:37:56.024235   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:37:56.024247   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:37:56.035918   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:37:56.035932   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:37:56.068830   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:37:56.068836   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:37:56.084779   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:37:56.084789   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:37:56.096634   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:37:56.096647   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:37:56.115434   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:37:56.115444   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:37:58.641929   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:03.644351   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:03.644838   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:03.683261   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:03.683394   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:03.704594   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:03.704707   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:03.719752   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:38:03.719819   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:03.731537   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:03.731601   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:03.742848   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:03.742906   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:03.752976   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:03.753039   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:03.762863   16966 logs.go:276] 0 containers: []
	W0520 04:38:03.762874   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:03.762927   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:03.777848   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:03.777866   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:03.777871   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:03.789646   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:03.789658   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:03.794148   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:03.794156   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:03.807908   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:03.807918   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:03.819569   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:03.819580   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:03.852566   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:03.852579   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:03.870045   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:03.870055   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:03.894797   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:03.894805   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:03.928950   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:03.928956   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:03.963353   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:03.963365   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:03.978454   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:03.978465   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:03.990286   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:03.990298   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:04.005188   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:04.005197   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:06.518959   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:11.521185   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:11.521390   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:11.541375   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:11.541464   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:11.555145   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:11.555216   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:11.566493   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:38:11.566551   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:11.576564   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:11.576632   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:11.586639   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:11.586703   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:11.600719   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:11.600781   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:11.610359   16966 logs.go:276] 0 containers: []
	W0520 04:38:11.610373   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:11.610432   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:11.624140   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:11.624155   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:11.624160   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:11.638264   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:11.638274   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:11.653384   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:11.653396   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:11.671660   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:11.671671   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:11.682986   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:11.682996   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:11.707106   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:11.707113   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:11.741061   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:11.741067   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:11.775531   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:11.775541   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:11.786865   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:11.786878   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:11.805297   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:11.805307   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:11.820258   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:11.820271   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:11.831942   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:11.831952   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:11.837105   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:11.837113   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:14.353115   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:19.355766   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:19.355996   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:19.377846   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:19.377944   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:19.392726   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:19.392789   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:19.405296   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:38:19.405352   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:19.416104   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:19.416173   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:19.426444   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:19.426509   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:19.436687   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:19.436756   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:19.446853   16966 logs.go:276] 0 containers: []
	W0520 04:38:19.446865   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:19.446918   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:19.457450   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:19.457467   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:19.457472   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:19.469023   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:19.469033   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:19.481069   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:19.481082   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:19.499736   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:19.499745   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:19.503876   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:19.503883   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:19.539004   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:19.539016   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:19.553613   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:19.553625   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:19.567542   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:19.567552   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:19.579007   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:19.579020   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:19.590613   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:19.590622   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:19.607973   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:19.607983   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:19.642041   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:19.642048   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:19.666222   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:19.666229   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:22.179743   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:27.182452   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:27.182925   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:27.222985   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:27.223124   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:27.248943   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:27.249053   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:27.265101   16966 logs.go:276] 2 containers: [e63043bb5ff5 b288581dbbdf]
	I0520 04:38:27.265177   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:27.280651   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:27.280717   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:27.291319   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:27.291390   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:27.302045   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:27.302104   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:27.312882   16966 logs.go:276] 0 containers: []
	W0520 04:38:27.312893   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:27.312940   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:27.324673   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:27.324688   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:27.324694   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:27.339713   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:27.339722   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:27.374304   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:27.374315   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:27.386108   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:27.386119   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:27.400733   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:27.400745   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:27.414874   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:27.414886   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:27.426864   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:27.426873   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:27.438222   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:27.438231   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:27.459014   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:27.459022   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:27.470351   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:27.470363   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:27.505330   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:27.505339   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:27.510016   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:27.510023   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:27.534118   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:27.534124   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:30.047207   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:35.049882   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:35.101695   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:35.116734   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:35.116808   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:35.132873   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:35.132941   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:35.144094   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:38:35.144165   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:35.155119   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:35.155186   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:35.166789   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:35.166852   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:35.178363   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:35.178415   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:35.189964   16966 logs.go:276] 0 containers: []
	W0520 04:38:35.189974   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:35.190019   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:35.201530   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:35.201547   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:35.201552   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:35.235766   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:38:35.235780   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:38:35.247013   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:35.247025   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:35.261236   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:35.261246   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:35.272892   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:35.272905   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:35.287721   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:35.287733   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:35.298901   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:35.298916   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:35.310274   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:38:35.310283   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:38:35.321415   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:35.321425   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:35.333271   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:35.333284   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:35.367055   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:35.367066   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:35.371044   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:35.371053   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:35.389509   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:35.389521   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:35.400711   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:35.400720   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:35.418028   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:35.418038   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:37.943700   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:42.946428   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:42.946734   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:42.977000   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:42.977121   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:42.995735   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:42.995817   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:43.009618   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:38:43.009693   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:43.021283   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:43.021346   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:43.032815   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:43.032883   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:43.043529   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:43.043598   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:43.053688   16966 logs.go:276] 0 containers: []
	W0520 04:38:43.053699   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:43.053750   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:43.064688   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:43.064704   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:43.064709   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:43.086847   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:43.086859   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:43.100950   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:38:43.100962   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:38:43.112553   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:43.112563   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:43.124747   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:38:43.124756   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:38:43.136321   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:43.136333   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:43.147797   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:43.147810   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:43.162815   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:43.162825   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:43.186567   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:43.186576   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:43.219102   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:43.219110   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:43.255895   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:43.255907   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:43.273007   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:43.273018   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:43.284516   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:43.284529   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:43.295855   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:43.295865   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:43.300453   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:43.300460   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:45.819810   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:50.822052   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:50.822511   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:50.863745   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:50.863873   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:50.890660   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:50.890758   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:50.904706   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:38:50.904778   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:50.917016   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:50.917075   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:50.927636   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:50.927704   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:50.938892   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:50.938958   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:50.954521   16966 logs.go:276] 0 containers: []
	W0520 04:38:50.954535   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:50.954584   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:50.965133   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:50.965151   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:50.965156   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:38:50.979444   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:50.979456   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:50.994394   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:50.994403   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:51.005598   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:51.005609   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:51.029206   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:51.029214   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:51.062689   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:51.062698   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:51.097432   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:51.097446   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:51.109434   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:51.109446   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:51.121446   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:51.121458   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:51.142402   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:51.142414   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:51.146775   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:51.146784   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:51.158619   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:51.158628   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:51.170865   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:51.170878   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:51.190701   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:38:51.190714   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:38:51.202103   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:38:51.202113   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:38:53.718548   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:38:58.720943   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:38:58.721014   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:38:58.732342   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:38:58.732399   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:38:58.743652   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:38:58.743720   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:38:58.755634   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:38:58.755699   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:38:58.768196   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:38:58.768249   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:38:58.779004   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:38:58.779061   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:38:58.791628   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:38:58.791702   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:38:58.803449   16966 logs.go:276] 0 containers: []
	W0520 04:38:58.803460   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:38:58.803502   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:38:58.814780   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:38:58.814796   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:38:58.814801   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:38:58.818948   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:38:58.818954   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:38:58.831323   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:38:58.831332   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:38:58.858285   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:38:58.858298   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:38:58.871141   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:38:58.871155   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:38:58.884065   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:38:58.884080   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:38:58.898381   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:38:58.898391   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:38:58.910577   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:38:58.910588   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:38:58.923671   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:38:58.923684   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:38:58.964628   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:38:58.964643   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:38:58.981258   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:38:58.981269   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:38:59.008478   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:38:59.008488   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:38:59.024389   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:38:59.024399   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:38:59.062256   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:38:59.062277   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:38:59.086241   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:38:59.086252   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:01.603255   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:06.605854   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:06.605957   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:06.619060   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:06.619132   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:06.631348   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:06.631420   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:06.643899   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:06.643974   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:06.658022   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:06.658088   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:06.669004   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:06.669066   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:06.679776   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:06.679836   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:06.690049   16966 logs.go:276] 0 containers: []
	W0520 04:39:06.690059   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:06.690111   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:06.701064   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:06.701080   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:06.701084   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:06.717699   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:06.717710   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:06.733214   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:06.733225   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:06.745549   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:06.745563   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:06.767206   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:06.767215   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:06.803345   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:06.803353   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:06.808150   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:06.808160   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:06.822218   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:06.822226   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:06.834342   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:06.834358   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:06.846998   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:06.847010   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:06.884346   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:06.884356   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:06.902769   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:06.902781   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:06.916451   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:06.916461   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:06.927683   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:06.927693   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:06.941731   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:06.941742   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:09.468673   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:14.469482   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:14.469619   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:14.480367   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:14.480442   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:14.491026   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:14.491097   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:14.501291   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:14.501363   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:14.511634   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:14.511705   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:14.526876   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:14.526935   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:14.537206   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:14.537270   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:14.547208   16966 logs.go:276] 0 containers: []
	W0520 04:39:14.547218   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:14.547267   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:14.562099   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:14.562117   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:14.562122   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:14.575933   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:14.575946   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:14.592987   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:14.592997   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:14.607262   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:14.607275   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:14.640999   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:14.641006   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:14.657295   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:14.657304   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:14.670393   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:14.670406   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:14.695102   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:14.695109   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:14.706594   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:14.706606   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:14.711323   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:14.711332   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:14.748781   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:14.748795   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:14.760380   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:14.760390   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:14.771437   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:14.771448   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:14.783232   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:14.783240   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:14.795247   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:14.795258   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:17.309032   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:22.309984   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:22.310072   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:22.321650   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:22.321720   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:22.333466   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:22.333514   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:22.343886   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:22.343960   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:22.356910   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:22.356970   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:22.368356   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:22.368408   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:22.379543   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:22.379596   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:22.390056   16966 logs.go:276] 0 containers: []
	W0520 04:39:22.390069   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:22.390120   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:22.402325   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:22.402340   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:22.402347   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:22.407740   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:22.407754   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:22.445089   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:22.445100   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:22.460514   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:22.460525   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:22.474724   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:22.474736   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:22.496572   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:22.496583   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:22.510282   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:22.510290   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:22.522292   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:22.522302   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:22.534741   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:22.534755   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:22.553166   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:22.553175   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:22.588508   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:22.588518   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:22.601280   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:22.601292   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:22.615531   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:22.615548   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:22.631645   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:22.631653   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:22.655925   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:22.655938   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:25.169889   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:30.172581   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:30.172956   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:30.213421   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:30.213545   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:30.233645   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:30.233722   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:30.247368   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:30.247435   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:30.258037   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:30.258095   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:30.268754   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:30.268823   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:30.279553   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:30.279629   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:30.290344   16966 logs.go:276] 0 containers: []
	W0520 04:39:30.290353   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:30.290400   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:30.301180   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:30.301201   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:30.301206   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:30.313505   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:30.313518   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:30.325791   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:30.325801   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:30.337282   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:30.337292   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:30.348954   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:30.348964   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:30.382135   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:30.382143   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:30.386058   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:30.386066   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:30.400226   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:30.400238   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:30.411960   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:30.411974   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:30.432286   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:30.432300   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:30.444069   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:30.444082   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:30.467732   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:30.467739   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:30.502440   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:30.502451   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:30.517583   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:30.517592   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:30.532601   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:30.532611   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:33.053052   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:38.055292   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:38.055641   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:38.094515   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:38.094646   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:38.117164   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:38.117269   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:38.133089   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:38.133170   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:38.145013   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:38.145080   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:38.156338   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:38.156403   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:38.166894   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:38.166957   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:38.179109   16966 logs.go:276] 0 containers: []
	W0520 04:39:38.179119   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:38.179171   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:38.189577   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:38.189593   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:38.189598   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:38.224380   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:38.224393   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:38.239650   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:38.239660   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:38.251646   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:38.251658   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:38.264363   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:38.264373   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:38.283692   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:38.283719   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:38.296284   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:38.296298   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:38.329443   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:38.329452   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:38.341376   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:38.341388   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:38.361411   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:38.361420   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:38.373320   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:38.373329   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:38.385119   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:38.385129   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:38.396637   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:38.396649   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:38.400820   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:38.400828   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:38.414615   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:38.414628   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:40.941796   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:45.944512   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:45.944579   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:45.955480   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:45.955540   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:45.966223   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:45.966298   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:45.978703   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:45.978771   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:45.992064   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:45.992113   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:46.002678   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:46.002732   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:46.013875   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:46.013925   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:46.025468   16966 logs.go:276] 0 containers: []
	W0520 04:39:46.025482   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:46.025553   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:46.037227   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:46.037242   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:46.037247   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:46.074951   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:46.074963   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:46.087365   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:46.087375   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:46.104521   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:46.104533   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:46.109113   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:46.109124   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:46.120906   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:46.120918   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:46.133597   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:46.133608   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:46.145864   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:46.145874   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:46.158579   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:46.158590   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:46.195535   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:46.195546   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:46.216931   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:46.216946   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:46.243558   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:46.243573   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:46.260879   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:46.260888   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:46.273176   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:46.273185   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:46.301271   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:46.301289   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:48.823148   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:39:53.825933   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:39:53.826414   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:39:53.867649   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:39:53.867779   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:39:53.889487   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:39:53.889599   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:39:53.904405   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:39:53.904481   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:39:53.916867   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:39:53.916924   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:39:53.928227   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:39:53.928289   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:39:53.939644   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:39:53.939721   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:39:53.949822   16966 logs.go:276] 0 containers: []
	W0520 04:39:53.949835   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:39:53.949885   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:39:53.960254   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:39:53.960269   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:39:53.960274   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:39:53.971342   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:39:53.971351   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:39:53.983373   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:39:53.983382   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:39:53.994631   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:39:53.994642   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:39:54.011377   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:39:54.011387   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:39:54.034042   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:39:54.034052   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:39:54.038459   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:39:54.038468   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:39:54.052142   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:39:54.052151   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:39:54.063396   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:39:54.063407   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:39:54.076028   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:39:54.076041   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:39:54.087333   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:39:54.087343   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:39:54.102299   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:39:54.102309   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:39:54.126683   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:39:54.126697   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:39:54.142989   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:39:54.143002   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:39:54.177291   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:39:54.177300   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:39:56.718099   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:40:01.720371   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:40:01.720786   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:40:01.764941   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:40:01.765051   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:40:01.783624   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:40:01.783707   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:40:01.796640   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:40:01.796712   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:40:01.808626   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:40:01.808687   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:40:01.823890   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:40:01.823962   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:40:01.835195   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:40:01.835265   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:40:01.845630   16966 logs.go:276] 0 containers: []
	W0520 04:40:01.845642   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:40:01.845696   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:40:01.856013   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:40:01.856034   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:40:01.856042   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:40:01.871926   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:40:01.871935   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:40:01.876669   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:40:01.876678   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:40:01.894529   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:40:01.894538   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:40:01.906371   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:40:01.906381   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:40:01.917825   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:40:01.917835   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:40:01.929651   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:40:01.929663   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:40:01.954214   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:40:01.954222   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:40:01.988383   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:40:01.988395   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:40:02.002686   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:40:02.002698   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:40:02.016874   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:40:02.016883   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:40:02.029705   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:40:02.029718   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:40:02.041392   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:40:02.041405   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:40:02.074798   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:40:02.074807   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:40:02.086553   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:40:02.086564   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:40:04.599453   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:40:09.602144   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:40:09.602224   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 04:40:09.614062   16966 logs.go:276] 1 containers: [4ac7cffa689c]
	I0520 04:40:09.614117   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 04:40:09.626853   16966 logs.go:276] 1 containers: [6fe281f52f2e]
	I0520 04:40:09.626904   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 04:40:09.643650   16966 logs.go:276] 4 containers: [ce05992d81e6 cdc4ee44ea1c e63043bb5ff5 b288581dbbdf]
	I0520 04:40:09.643706   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 04:40:09.654516   16966 logs.go:276] 1 containers: [586adc8801c3]
	I0520 04:40:09.654585   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 04:40:09.668370   16966 logs.go:276] 1 containers: [ef62dabdde63]
	I0520 04:40:09.668429   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 04:40:09.679493   16966 logs.go:276] 1 containers: [6a76df3470cf]
	I0520 04:40:09.679547   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 04:40:09.690515   16966 logs.go:276] 0 containers: []
	W0520 04:40:09.690528   16966 logs.go:278] No container was found matching "kindnet"
	I0520 04:40:09.690577   16966 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 04:40:09.701894   16966 logs.go:276] 1 containers: [2776628c96d9]
	I0520 04:40:09.701909   16966 logs.go:123] Gathering logs for storage-provisioner [2776628c96d9] ...
	I0520 04:40:09.701915   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2776628c96d9"
	I0520 04:40:09.715157   16966 logs.go:123] Gathering logs for container status ...
	I0520 04:40:09.715166   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 04:40:09.728222   16966 logs.go:123] Gathering logs for describe nodes ...
	I0520 04:40:09.728235   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 04:40:09.765639   16966 logs.go:123] Gathering logs for kube-apiserver [4ac7cffa689c] ...
	I0520 04:40:09.765651   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ac7cffa689c"
	I0520 04:40:09.780921   16966 logs.go:123] Gathering logs for etcd [6fe281f52f2e] ...
	I0520 04:40:09.780931   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fe281f52f2e"
	I0520 04:40:09.795695   16966 logs.go:123] Gathering logs for coredns [ce05992d81e6] ...
	I0520 04:40:09.795708   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce05992d81e6"
	I0520 04:40:09.808187   16966 logs.go:123] Gathering logs for coredns [e63043bb5ff5] ...
	I0520 04:40:09.808199   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e63043bb5ff5"
	I0520 04:40:09.820555   16966 logs.go:123] Gathering logs for coredns [b288581dbbdf] ...
	I0520 04:40:09.820568   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b288581dbbdf"
	I0520 04:40:09.832971   16966 logs.go:123] Gathering logs for dmesg ...
	I0520 04:40:09.832981   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 04:40:09.837946   16966 logs.go:123] Gathering logs for kube-scheduler [586adc8801c3] ...
	I0520 04:40:09.837954   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586adc8801c3"
	I0520 04:40:09.854336   16966 logs.go:123] Gathering logs for kube-controller-manager [6a76df3470cf] ...
	I0520 04:40:09.854349   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a76df3470cf"
	I0520 04:40:09.873503   16966 logs.go:123] Gathering logs for Docker ...
	I0520 04:40:09.873514   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 04:40:09.897429   16966 logs.go:123] Gathering logs for kubelet ...
	I0520 04:40:09.897443   16966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 04:40:09.932434   16966 logs.go:123] Gathering logs for coredns [cdc4ee44ea1c] ...
	I0520 04:40:09.932452   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc4ee44ea1c"
	I0520 04:40:09.945741   16966 logs.go:123] Gathering logs for kube-proxy [ef62dabdde63] ...
	I0520 04:40:09.945754   16966 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef62dabdde63"
	I0520 04:40:12.461835   16966 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 04:40:17.464575   16966 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 04:40:17.471896   16966 out.go:177] 
	W0520 04:40:17.475766   16966 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 04:40:17.475788   16966 out.go:239] * 
	* 
	W0520 04:40:17.477571   16966 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:40:17.491659   16966 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-484000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (576.65s)
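
The stderr above records the failure mode for this upgrade: minikube keeps probing the guest apiserver at https://10.0.2.15:8443/healthz, each probe times out, and once the 6m0s node-wait budget is spent it exits with GUEST_START. A minimal sketch of that kind of healthz poll in Go follows; the probe URL and the 6-minute budget are taken from the log, while the 5s per-request timeout, the 2s retry interval, and the skipped certificate verification are assumptions for illustration, not the harness's actual code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 "ok" or the budget expires.
	func waitForHealthz(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request timeout (assumption)
			// The bootstrapping apiserver presents a certificate this host
			// does not trust yet, so verification is skipped for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // retry interval (assumption)
		}
		return fmt.Errorf("apiserver healthz never reported healthy within %s", budget)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Against this run the loop behaves exactly as the log shows: every Get times out, so the deadline error is what surfaces in the GUEST_START message.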

                                                
                                    
TestPause/serial/Start (10.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-455000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-455000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.028750417s)

                                                
                                                
-- stdout --
	* [pause-455000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-455000" primary control-plane node in "pause-455000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-455000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-455000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-455000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-455000 -n pause-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-455000 -n pause-455000: exit status 7 (64.345375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-455000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.09s)
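
Every qemu2 start below fails the same way: the driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal host-side check of that socket in Go, assuming only that the path from the ERROR lines is where the daemon should be listening (an illustration, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the ERROR lines above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the state every qemu2 start in this report hits:
			// nothing is listening behind the socket, so the driver's
			// socket_vmnet_client invocation is refused.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}

When the daemon is down, DialTimeout returns the same connection-refused error the OUTPUT/ERROR blocks show, and minikube's retry five seconds later fails identically.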

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-217000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-217000 --driver=qemu2 : exit status 80 (9.923531583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-217000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-217000" primary control-plane node in "NoKubernetes-217000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-217000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-217000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-217000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000: exit status 7 (55.7245ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)
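
The post-mortem here, as in the other failures, runs out/minikube-darwin-arm64 status --format={{.Host}}: a Go-template format string evaluated against the profile's status, which is why the -- stdout -- block contains only the word Stopped. A small sketch of that template evaluation, using an illustrative struct whose field names are assumptions rather than minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is an illustrative stand-in for the structure the status
	// command renders; the field names are assumptions.
	type Status struct {
		Host    string
		Kubelet string
	}

	func main() {
		// The same format string passed via --format={{.Host}} above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// With the VM never provisioned, the host state is Stopped, which is
		// all the post-mortem's -- stdout -- block contains.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
	}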

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --driver=qemu2 : exit status 80 (5.221224834s)

                                                
                                                
-- stdout --
	* [NoKubernetes-217000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-217000
	* Restarting existing qemu2 VM for "NoKubernetes-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000: exit status 7 (63.533959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2436955s)

                                                
                                                
-- stdout --
	* [NoKubernetes-217000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-217000
	* Restarting existing qemu2 VM for "NoKubernetes-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000: exit status 7 (55.675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-217000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-217000 --driver=qemu2 : exit status 80 (5.246943708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-217000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-217000
	* Restarting existing qemu2 VM for "NoKubernetes-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-217000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-217000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-217000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-217000 -n NoKubernetes-217000: exit status 7 (41.022167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-217000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.929975709s)

                                                
                                                
-- stdout --
	* [auto-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-645000" primary control-plane node in "auto-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:38:34.731604   17216 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:38:34.731739   17216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:38:34.731742   17216 out.go:304] Setting ErrFile to fd 2...
	I0520 04:38:34.731744   17216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:38:34.731878   17216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:38:34.732973   17216 out.go:298] Setting JSON to false
	I0520 04:38:34.750802   17216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9485,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:38:34.750873   17216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:38:34.755401   17216 out.go:177] * [auto-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:38:34.762575   17216 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:38:34.766531   17216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:38:34.762626   17216 notify.go:220] Checking for updates...
	I0520 04:38:34.772548   17216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:38:34.775567   17216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:38:34.778503   17216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:38:34.781520   17216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:38:34.784811   17216 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:38:34.784883   17216 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:38:34.784934   17216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:38:34.789507   17216 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:38:34.795528   17216 start.go:297] selected driver: qemu2
	I0520 04:38:34.795533   17216 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:38:34.795539   17216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:38:34.797664   17216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:38:34.800594   17216 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:38:34.803595   17216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:38:34.803615   17216 cni.go:84] Creating CNI manager for ""
	I0520 04:38:34.803622   17216 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:38:34.803626   17216 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:38:34.803661   17216 start.go:340] cluster config:
	{Name:auto-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:38:34.807884   17216 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:38:34.814511   17216 out.go:177] * Starting "auto-645000" primary control-plane node in "auto-645000" cluster
	I0520 04:38:34.818559   17216 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:38:34.818576   17216 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:38:34.818584   17216 cache.go:56] Caching tarball of preloaded images
	I0520 04:38:34.818635   17216 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:38:34.818640   17216 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:38:34.818693   17216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/auto-645000/config.json ...
	I0520 04:38:34.818703   17216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/auto-645000/config.json: {Name:mk07c314f2b49b11cc9e5c77ab66db737b9a497d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:38:34.819008   17216 start.go:360] acquireMachinesLock for auto-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:38:34.819038   17216 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "auto-645000"
	I0520 04:38:34.819049   17216 start.go:93] Provisioning new machine with config: &{Name:auto-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:auto-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:38:34.819072   17216 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:38:34.827555   17216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:38:34.842123   17216 start.go:159] libmachine.API.Create for "auto-645000" (driver="qemu2")
	I0520 04:38:34.842153   17216 client.go:168] LocalClient.Create starting
	I0520 04:38:34.842213   17216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:38:34.842251   17216 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:34.842261   17216 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:34.842296   17216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:38:34.842322   17216 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:34.842329   17216 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:34.842772   17216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:38:35.104909   17216 main.go:141] libmachine: Creating SSH key...
	I0520 04:38:35.171210   17216 main.go:141] libmachine: Creating Disk image...
	I0520 04:38:35.171222   17216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:38:35.171458   17216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2
	I0520 04:38:35.185540   17216 main.go:141] libmachine: STDOUT: 
	I0520 04:38:35.185561   17216 main.go:141] libmachine: STDERR: 
	I0520 04:38:35.185620   17216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2 +20000M
	I0520 04:38:35.198068   17216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:38:35.198090   17216 main.go:141] libmachine: STDERR: 
	I0520 04:38:35.198107   17216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2
	I0520 04:38:35.198113   17216 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:38:35.198169   17216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:25:24:7c:36:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2
	I0520 04:38:35.200294   17216 main.go:141] libmachine: STDOUT: 
	I0520 04:38:35.200315   17216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:38:35.200334   17216 client.go:171] duration metric: took 358.18075ms to LocalClient.Create
	I0520 04:38:37.202539   17216 start.go:128] duration metric: took 2.383465917s to createHost
	I0520 04:38:37.202614   17216 start.go:83] releasing machines lock for "auto-645000", held for 2.383595708s
	W0520 04:38:37.202696   17216 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:38:37.211035   17216 out.go:177] * Deleting "auto-645000" in qemu2 ...
	W0520 04:38:37.236690   17216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:38:37.236728   17216 start.go:728] Will try again in 5 seconds ...
	I0520 04:38:42.238953   17216 start.go:360] acquireMachinesLock for auto-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:38:42.239394   17216 start.go:364] duration metric: took 359.958µs to acquireMachinesLock for "auto-645000"
	I0520 04:38:42.239563   17216 start.go:93] Provisioning new machine with config: &{Name:auto-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:auto-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:38:42.239791   17216 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:38:42.245353   17216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:38:42.279289   17216 start.go:159] libmachine.API.Create for "auto-645000" (driver="qemu2")
	I0520 04:38:42.279338   17216 client.go:168] LocalClient.Create starting
	I0520 04:38:42.279455   17216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:38:42.279506   17216 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:42.279518   17216 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:42.279578   17216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:38:42.279613   17216 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:42.279625   17216 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:42.280181   17216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:38:42.425082   17216 main.go:141] libmachine: Creating SSH key...
	I0520 04:38:42.566989   17216 main.go:141] libmachine: Creating Disk image...
	I0520 04:38:42.566999   17216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:38:42.567232   17216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2
	I0520 04:38:42.579720   17216 main.go:141] libmachine: STDOUT: 
	I0520 04:38:42.579747   17216 main.go:141] libmachine: STDERR: 
	I0520 04:38:42.579814   17216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2 +20000M
	I0520 04:38:42.591147   17216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:38:42.591167   17216 main.go:141] libmachine: STDERR: 
	I0520 04:38:42.591184   17216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2
	I0520 04:38:42.591189   17216 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:38:42.591222   17216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:98:31:3d:1d:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/auto-645000/disk.qcow2
	I0520 04:38:42.592975   17216 main.go:141] libmachine: STDOUT: 
	I0520 04:38:42.592990   17216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:38:42.593002   17216 client.go:171] duration metric: took 313.661958ms to LocalClient.Create
	I0520 04:38:44.595198   17216 start.go:128] duration metric: took 2.35538s to createHost
	I0520 04:38:44.595272   17216 start.go:83] releasing machines lock for "auto-645000", held for 2.355889542s
	W0520 04:38:44.595729   17216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:38:44.603376   17216 out.go:177] 
	W0520 04:38:44.609352   17216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:38:44.609406   17216 out.go:239] * 
	* 
	W0520 04:38:44.611919   17216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:38:44.621273   17216 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.93s)
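
Before hitting the socket_vmnet failure, the stderr above shows libmachine preparing the VM disk: qemu-img convert turns the raw base image into qcow2, then qemu-img resize grows it by +20000M ("Creating 20000 MB hard disk image..."). A minimal sketch of those two steps via os/exec, with placeholder paths (the real run uses the profile's .minikube/machines directory):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two qemu-img invocations logged above:
	// convert the raw base image to qcow2, then grow it by extraMB megabytes.
	func createDisk(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Placeholder file names; the log uses disk.qcow2.raw and disk.qcow2
		// under the machine directory for the profile.
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}

In this run both qemu-img steps succeed (STDOUT: Image resized.); the start only fails afterwards, when the qemu-system-aarch64 launch is routed through socket_vmnet_client and the socket connection is refused.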

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.834341667s)

                                                
                                                
-- stdout --
	* [flannel-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-645000" primary control-plane node in "flannel-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:38:46.895824   17326 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:38:46.895984   17326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:38:46.895987   17326 out.go:304] Setting ErrFile to fd 2...
	I0520 04:38:46.895989   17326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:38:46.896126   17326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:38:46.897188   17326 out.go:298] Setting JSON to false
	I0520 04:38:46.913644   17326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9497,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:38:46.913705   17326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:38:46.919555   17326 out.go:177] * [flannel-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:38:46.926799   17326 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:38:46.930801   17326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:38:46.926898   17326 notify.go:220] Checking for updates...
	I0520 04:38:46.936740   17326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:38:46.939758   17326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:38:46.942735   17326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:38:46.945772   17326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:38:46.947640   17326 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:38:46.947707   17326 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:38:46.947746   17326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:38:46.951709   17326 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:38:46.958603   17326 start.go:297] selected driver: qemu2
	I0520 04:38:46.958610   17326 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:38:46.958616   17326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:38:46.960858   17326 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:38:46.963773   17326 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:38:46.966811   17326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:38:46.966829   17326 cni.go:84] Creating CNI manager for "flannel"
	I0520 04:38:46.966832   17326 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0520 04:38:46.966861   17326 start.go:340] cluster config:
	{Name:flannel-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:38:46.971262   17326 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:38:46.978723   17326 out.go:177] * Starting "flannel-645000" primary control-plane node in "flannel-645000" cluster
	I0520 04:38:46.982861   17326 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:38:46.982878   17326 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:38:46.982888   17326 cache.go:56] Caching tarball of preloaded images
	I0520 04:38:46.982957   17326 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:38:46.982963   17326 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:38:46.983020   17326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/flannel-645000/config.json ...
	I0520 04:38:46.983031   17326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/flannel-645000/config.json: {Name:mk70c6484bfc5d62db1c9cb8cfdfcf70155051d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:38:46.983230   17326 start.go:360] acquireMachinesLock for flannel-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:38:46.983260   17326 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "flannel-645000"
	I0520 04:38:46.983271   17326 start.go:93] Provisioning new machine with config: &{Name:flannel-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:flannel-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:38:46.983297   17326 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:38:46.991745   17326 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:38:47.006745   17326 start.go:159] libmachine.API.Create for "flannel-645000" (driver="qemu2")
	I0520 04:38:47.006769   17326 client.go:168] LocalClient.Create starting
	I0520 04:38:47.006832   17326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:38:47.006862   17326 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:47.006875   17326 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:47.006915   17326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:38:47.006937   17326 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:47.006947   17326 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:47.007266   17326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:38:47.145529   17326 main.go:141] libmachine: Creating SSH key...
	I0520 04:38:47.225874   17326 main.go:141] libmachine: Creating Disk image...
	I0520 04:38:47.225889   17326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:38:47.226128   17326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2
	I0520 04:38:47.239149   17326 main.go:141] libmachine: STDOUT: 
	I0520 04:38:47.239170   17326 main.go:141] libmachine: STDERR: 
	I0520 04:38:47.239226   17326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2 +20000M
	I0520 04:38:47.250485   17326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:38:47.250504   17326 main.go:141] libmachine: STDERR: 
	I0520 04:38:47.250516   17326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2
	I0520 04:38:47.250521   17326 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:38:47.250558   17326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:2f:4d:70:fa:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2
	I0520 04:38:47.252319   17326 main.go:141] libmachine: STDOUT: 
	I0520 04:38:47.252338   17326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:38:47.252356   17326 client.go:171] duration metric: took 245.582375ms to LocalClient.Create
	I0520 04:38:49.254626   17326 start.go:128] duration metric: took 2.271331709s to createHost
	I0520 04:38:49.254697   17326 start.go:83] releasing machines lock for "flannel-645000", held for 2.271457583s
	W0520 04:38:49.254758   17326 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:38:49.261778   17326 out.go:177] * Deleting "flannel-645000" in qemu2 ...
	W0520 04:38:49.285470   17326 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:38:49.285498   17326 start.go:728] Will try again in 5 seconds ...
	I0520 04:38:54.287564   17326 start.go:360] acquireMachinesLock for flannel-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:38:54.287787   17326 start.go:364] duration metric: took 166.541µs to acquireMachinesLock for "flannel-645000"
	I0520 04:38:54.287812   17326 start.go:93] Provisioning new machine with config: &{Name:flannel-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:flannel-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:38:54.287943   17326 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:38:54.296197   17326 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:38:54.312029   17326 start.go:159] libmachine.API.Create for "flannel-645000" (driver="qemu2")
	I0520 04:38:54.312061   17326 client.go:168] LocalClient.Create starting
	I0520 04:38:54.312141   17326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:38:54.312184   17326 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:54.312194   17326 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:54.312236   17326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:38:54.312258   17326 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:54.312265   17326 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:54.312572   17326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:38:54.451133   17326 main.go:141] libmachine: Creating SSH key...
	I0520 04:38:54.635995   17326 main.go:141] libmachine: Creating Disk image...
	I0520 04:38:54.636008   17326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:38:54.636232   17326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2
	I0520 04:38:54.649153   17326 main.go:141] libmachine: STDOUT: 
	I0520 04:38:54.649177   17326 main.go:141] libmachine: STDERR: 
	I0520 04:38:54.649239   17326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2 +20000M
	I0520 04:38:54.660555   17326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:38:54.660576   17326 main.go:141] libmachine: STDERR: 
	I0520 04:38:54.660587   17326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2
	I0520 04:38:54.660590   17326 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:38:54.660629   17326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:4a:f5:ae:1a:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/flannel-645000/disk.qcow2
	I0520 04:38:54.662468   17326 main.go:141] libmachine: STDOUT: 
	I0520 04:38:54.662483   17326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:38:54.662495   17326 client.go:171] duration metric: took 350.435042ms to LocalClient.Create
	I0520 04:38:56.664784   17326 start.go:128] duration metric: took 2.37682325s to createHost
	I0520 04:38:56.664962   17326 start.go:83] releasing machines lock for "flannel-645000", held for 2.377193166s
	W0520 04:38:56.665346   17326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:38:56.672963   17326 out.go:177] 
	W0520 04:38:56.677922   17326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:38:56.677979   17326 out.go:239] * 
	* 
	W0520 04:38:56.680642   17326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:38:56.688911   17326 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.84s)
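
Every start in this group fails at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and each run exits with status 80 before Kubernetes is involved. A minimal diagnostic sketch for the CI host follows; it only uses the paths already shown in the log above, and the launchd label in the last command is an assumption about the local install, not something taken from this report.

	ls -l /var/run/socket_vmnet                   # the Unix socket the client tries to connect to should exist
	pgrep -fl socket_vmnet                        # the socket_vmnet daemon should be running
	sudo launchctl list | grep -i socket_vmnet    # hypothetical launchd label; adjust to however socket_vmnet is installed
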

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.851701708s)

                                                
                                                
-- stdout --
	* [kindnet-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-645000" primary control-plane node in "kindnet-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:38:59.070327   17444 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:38:59.070476   17444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:38:59.070479   17444 out.go:304] Setting ErrFile to fd 2...
	I0520 04:38:59.070481   17444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:38:59.070609   17444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:38:59.071860   17444 out.go:298] Setting JSON to false
	I0520 04:38:59.089773   17444 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9510,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:38:59.089873   17444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:38:59.097651   17444 out.go:177] * [kindnet-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:38:59.105686   17444 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:38:59.109697   17444 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:38:59.105712   17444 notify.go:220] Checking for updates...
	I0520 04:38:59.115670   17444 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:38:59.118703   17444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:38:59.121677   17444 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:38:59.124558   17444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:38:59.127973   17444 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:38:59.128037   17444 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:38:59.128079   17444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:38:59.132670   17444 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:38:59.139697   17444 start.go:297] selected driver: qemu2
	I0520 04:38:59.139704   17444 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:38:59.139710   17444 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:38:59.141850   17444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:38:59.145656   17444 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:38:59.148767   17444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:38:59.148789   17444 cni.go:84] Creating CNI manager for "kindnet"
	I0520 04:38:59.148797   17444 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 04:38:59.148835   17444 start.go:340] cluster config:
	{Name:kindnet-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:38:59.153005   17444 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:38:59.158819   17444 out.go:177] * Starting "kindnet-645000" primary control-plane node in "kindnet-645000" cluster
	I0520 04:38:59.162665   17444 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:38:59.162686   17444 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:38:59.162699   17444 cache.go:56] Caching tarball of preloaded images
	I0520 04:38:59.162767   17444 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:38:59.162772   17444 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:38:59.162831   17444 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kindnet-645000/config.json ...
	I0520 04:38:59.162842   17444 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kindnet-645000/config.json: {Name:mk88b8b2c2a38766acb5b0ac005ece2d52088736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:38:59.163057   17444 start.go:360] acquireMachinesLock for kindnet-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:38:59.163089   17444 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "kindnet-645000"
	I0520 04:38:59.163100   17444 start.go:93] Provisioning new machine with config: &{Name:kindnet-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kindnet-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:38:59.163128   17444 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:38:59.167685   17444 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:38:59.183134   17444 start.go:159] libmachine.API.Create for "kindnet-645000" (driver="qemu2")
	I0520 04:38:59.183159   17444 client.go:168] LocalClient.Create starting
	I0520 04:38:59.183213   17444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:38:59.183246   17444 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:59.183266   17444 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:59.183303   17444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:38:59.183326   17444 main.go:141] libmachine: Decoding PEM data...
	I0520 04:38:59.183332   17444 main.go:141] libmachine: Parsing certificate...
	I0520 04:38:59.183656   17444 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:38:59.321672   17444 main.go:141] libmachine: Creating SSH key...
	I0520 04:38:59.390459   17444 main.go:141] libmachine: Creating Disk image...
	I0520 04:38:59.390476   17444 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:38:59.390687   17444 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2
	I0520 04:38:59.403598   17444 main.go:141] libmachine: STDOUT: 
	I0520 04:38:59.403617   17444 main.go:141] libmachine: STDERR: 
	I0520 04:38:59.403669   17444 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2 +20000M
	I0520 04:38:59.415020   17444 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:38:59.415045   17444 main.go:141] libmachine: STDERR: 
	I0520 04:38:59.415063   17444 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2
	I0520 04:38:59.415069   17444 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:38:59.415095   17444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a0:d6:63:a6:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2
	I0520 04:38:59.416886   17444 main.go:141] libmachine: STDOUT: 
	I0520 04:38:59.416906   17444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:38:59.416925   17444 client.go:171] duration metric: took 233.764375ms to LocalClient.Create
	I0520 04:39:01.419125   17444 start.go:128] duration metric: took 2.255999541s to createHost
	I0520 04:39:01.419210   17444 start.go:83] releasing machines lock for "kindnet-645000", held for 2.25613s
	W0520 04:39:01.419366   17444 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:01.426863   17444 out.go:177] * Deleting "kindnet-645000" in qemu2 ...
	W0520 04:39:01.452472   17444 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:01.452507   17444 start.go:728] Will try again in 5 seconds ...
	I0520 04:39:06.454780   17444 start.go:360] acquireMachinesLock for kindnet-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:06.455433   17444 start.go:364] duration metric: took 520.958µs to acquireMachinesLock for "kindnet-645000"
	I0520 04:39:06.455591   17444 start.go:93] Provisioning new machine with config: &{Name:kindnet-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kindnet-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:06.455993   17444 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:06.464695   17444 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:06.510917   17444 start.go:159] libmachine.API.Create for "kindnet-645000" (driver="qemu2")
	I0520 04:39:06.510967   17444 client.go:168] LocalClient.Create starting
	I0520 04:39:06.511089   17444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:06.511148   17444 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:06.511164   17444 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:06.511230   17444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:06.511276   17444 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:06.511288   17444 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:06.511833   17444 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:06.660761   17444 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:06.824173   17444 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:06.824185   17444 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:06.824424   17444 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2
	I0520 04:39:06.838440   17444 main.go:141] libmachine: STDOUT: 
	I0520 04:39:06.838463   17444 main.go:141] libmachine: STDERR: 
	I0520 04:39:06.838546   17444 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2 +20000M
	I0520 04:39:06.851726   17444 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:06.851755   17444 main.go:141] libmachine: STDERR: 
	I0520 04:39:06.851771   17444 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2
	I0520 04:39:06.851775   17444 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:06.851817   17444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:cc:5f:c2:dc:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kindnet-645000/disk.qcow2
	I0520 04:39:06.853984   17444 main.go:141] libmachine: STDOUT: 
	I0520 04:39:06.854011   17444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:06.854031   17444 client.go:171] duration metric: took 343.064167ms to LocalClient.Create
	I0520 04:39:08.856239   17444 start.go:128] duration metric: took 2.400233833s to createHost
	I0520 04:39:08.856320   17444 start.go:83] releasing machines lock for "kindnet-645000", held for 2.400891208s
	W0520 04:39:08.856785   17444 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:08.863248   17444 out.go:177] 
	W0520 04:39:08.868588   17444 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:39:08.868622   17444 out.go:239] * 
	* 
	W0520 04:39:08.871226   17444 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:39:08.878462   17444 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.80286125s)

                                                
                                                
-- stdout --
	* [enable-default-cni-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-645000" primary control-plane node in "enable-default-cni-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:39:11.154400   17561 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:39:11.154510   17561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:11.154513   17561 out.go:304] Setting ErrFile to fd 2...
	I0520 04:39:11.154515   17561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:11.154642   17561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:39:11.155731   17561 out.go:298] Setting JSON to false
	I0520 04:39:11.172168   17561 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9522,"bootTime":1716195629,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:39:11.172238   17561 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:39:11.178814   17561 out.go:177] * [enable-default-cni-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:39:11.186745   17561 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:39:11.186816   17561 notify.go:220] Checking for updates...
	I0520 04:39:11.193720   17561 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:39:11.196740   17561 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:39:11.199791   17561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:39:11.202779   17561 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:39:11.205744   17561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:39:11.209150   17561 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:39:11.209218   17561 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:39:11.209266   17561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:39:11.213850   17561 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:39:11.220718   17561 start.go:297] selected driver: qemu2
	I0520 04:39:11.220726   17561 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:39:11.220731   17561 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:39:11.222990   17561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:39:11.225745   17561 out.go:177] * Automatically selected the socket_vmnet network
	E0520 04:39:11.228757   17561 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0520 04:39:11.228773   17561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:39:11.228789   17561 cni.go:84] Creating CNI manager for "bridge"
	I0520 04:39:11.228793   17561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:39:11.228829   17561 start.go:340] cluster config:
	{Name:enable-default-cni-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:39:11.233229   17561 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:39:11.240763   17561 out.go:177] * Starting "enable-default-cni-645000" primary control-plane node in "enable-default-cni-645000" cluster
	I0520 04:39:11.244715   17561 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:39:11.244733   17561 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:39:11.244746   17561 cache.go:56] Caching tarball of preloaded images
	I0520 04:39:11.244811   17561 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:39:11.244817   17561 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:39:11.244881   17561 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/enable-default-cni-645000/config.json ...
	I0520 04:39:11.244892   17561 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/enable-default-cni-645000/config.json: {Name:mk9b00b5d9607f7f779f73e97dc661bd3f1a916e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:39:11.245097   17561 start.go:360] acquireMachinesLock for enable-default-cni-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:11.245130   17561 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "enable-default-cni-645000"
	I0520 04:39:11.245142   17561 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:11.245164   17561 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:11.253684   17561 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:11.268621   17561 start.go:159] libmachine.API.Create for "enable-default-cni-645000" (driver="qemu2")
	I0520 04:39:11.268653   17561 client.go:168] LocalClient.Create starting
	I0520 04:39:11.268708   17561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:11.268738   17561 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:11.268746   17561 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:11.268782   17561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:11.268804   17561 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:11.268811   17561 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:11.269169   17561 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:11.407510   17561 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:11.472885   17561 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:11.472892   17561 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:11.473107   17561 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2
	I0520 04:39:11.485569   17561 main.go:141] libmachine: STDOUT: 
	I0520 04:39:11.485593   17561 main.go:141] libmachine: STDERR: 
	I0520 04:39:11.485660   17561 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2 +20000M
	I0520 04:39:11.496917   17561 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:11.496939   17561 main.go:141] libmachine: STDERR: 
	I0520 04:39:11.496965   17561 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2
	I0520 04:39:11.496970   17561 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:11.497003   17561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ac:16:4d:d0:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2
	I0520 04:39:11.498775   17561 main.go:141] libmachine: STDOUT: 
	I0520 04:39:11.498789   17561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:11.498810   17561 client.go:171] duration metric: took 230.154708ms to LocalClient.Create
	I0520 04:39:13.501058   17561 start.go:128] duration metric: took 2.255892666s to createHost
	I0520 04:39:13.501137   17561 start.go:83] releasing machines lock for "enable-default-cni-645000", held for 2.256025375s
	W0520 04:39:13.501240   17561 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:13.512610   17561 out.go:177] * Deleting "enable-default-cni-645000" in qemu2 ...
	W0520 04:39:13.540749   17561 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:13.540790   17561 start.go:728] Will try again in 5 seconds ...
	I0520 04:39:18.542981   17561 start.go:360] acquireMachinesLock for enable-default-cni-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:18.543508   17561 start.go:364] duration metric: took 426.583µs to acquireMachinesLock for "enable-default-cni-645000"
	I0520 04:39:18.543648   17561 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:18.544040   17561 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:18.551792   17561 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:18.604125   17561 start.go:159] libmachine.API.Create for "enable-default-cni-645000" (driver="qemu2")
	I0520 04:39:18.604176   17561 client.go:168] LocalClient.Create starting
	I0520 04:39:18.604296   17561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:18.604365   17561 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:18.604390   17561 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:18.604450   17561 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:18.604495   17561 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:18.604509   17561 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:18.605094   17561 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:18.757153   17561 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:18.860357   17561 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:18.860366   17561 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:18.860590   17561 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2
	I0520 04:39:18.873128   17561 main.go:141] libmachine: STDOUT: 
	I0520 04:39:18.873150   17561 main.go:141] libmachine: STDERR: 
	I0520 04:39:18.873202   17561 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2 +20000M
	I0520 04:39:18.884257   17561 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:18.884278   17561 main.go:141] libmachine: STDERR: 
	I0520 04:39:18.884290   17561 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2
	I0520 04:39:18.884295   17561 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:18.884321   17561 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:be:48:7c:aa:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/enable-default-cni-645000/disk.qcow2
	I0520 04:39:18.886090   17561 main.go:141] libmachine: STDOUT: 
	I0520 04:39:18.886114   17561 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:18.886127   17561 client.go:171] duration metric: took 281.948084ms to LocalClient.Create
	I0520 04:39:20.888298   17561 start.go:128] duration metric: took 2.344253584s to createHost
	I0520 04:39:20.888365   17561 start.go:83] releasing machines lock for "enable-default-cni-645000", held for 2.344860125s
	W0520 04:39:20.888768   17561 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:20.896362   17561 out.go:177] 
	W0520 04:39:20.902483   17561 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:39:20.902535   17561 out.go:239] * 
	* 
	W0520 04:39:20.905266   17561 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:39:20.915389   17561 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
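
The failure above, and the bridge and kubenet failures that follow, share one root cause: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, but nothing on the CI host is accepting connections on /var/run/socket_vmnet, so every VM creation attempt ends with "Connection refused" and the test exits with status 80. A minimal, hypothetical Go probe (not part of the test suite; the socket path is taken from the log lines above) that reproduces the failing check is sketched below:

    // probe_socket_vmnet.go - checks whether anything is listening on the
    // socket_vmnet control socket that the failing qemu2 starts depend on.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path reported in the failing logs
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Same condition as the "Failed to connect ... Connection refused" lines above.
            fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("%s is accepting connections\n", sock)
    }

If a probe like this fails on the agent, restarting the socket_vmnet daemon (however it is managed on that machine) is the likely prerequisite for the qemu2 network-plugin tests to pass; the retry-after-5-seconds logic visible in the log cannot succeed while the daemon is down.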

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.778479333s)

                                                
                                                
-- stdout --
	* [bridge-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-645000" primary control-plane node in "bridge-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:39:23.102865   17675 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:39:23.102984   17675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:23.102987   17675 out.go:304] Setting ErrFile to fd 2...
	I0520 04:39:23.102989   17675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:23.103120   17675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:39:23.104257   17675 out.go:298] Setting JSON to false
	I0520 04:39:23.120727   17675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9534,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:39:23.120801   17675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:39:23.125588   17675 out.go:177] * [bridge-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:39:23.133532   17675 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:39:23.133605   17675 notify.go:220] Checking for updates...
	I0520 04:39:23.138672   17675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:39:23.141654   17675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:39:23.144646   17675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:39:23.147623   17675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:39:23.150624   17675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:39:23.153935   17675 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:39:23.153999   17675 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:39:23.154044   17675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:39:23.158614   17675 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:39:23.165615   17675 start.go:297] selected driver: qemu2
	I0520 04:39:23.165622   17675 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:39:23.165630   17675 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:39:23.167906   17675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:39:23.170584   17675 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:39:23.173666   17675 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:39:23.173682   17675 cni.go:84] Creating CNI manager for "bridge"
	I0520 04:39:23.173685   17675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:39:23.173719   17675 start.go:340] cluster config:
	{Name:bridge-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:39:23.178206   17675 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:39:23.185606   17675 out.go:177] * Starting "bridge-645000" primary control-plane node in "bridge-645000" cluster
	I0520 04:39:23.189602   17675 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:39:23.189617   17675 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:39:23.189632   17675 cache.go:56] Caching tarball of preloaded images
	I0520 04:39:23.189689   17675 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:39:23.189694   17675 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:39:23.189743   17675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/bridge-645000/config.json ...
	I0520 04:39:23.189753   17675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/bridge-645000/config.json: {Name:mke8da420e916d0165b355e91b46448b5177266c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:39:23.190065   17675 start.go:360] acquireMachinesLock for bridge-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:23.190110   17675 start.go:364] duration metric: took 39.25µs to acquireMachinesLock for "bridge-645000"
	I0520 04:39:23.190120   17675 start.go:93] Provisioning new machine with config: &{Name:bridge-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:bridge-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:23.190148   17675 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:23.198601   17675 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:23.214670   17675 start.go:159] libmachine.API.Create for "bridge-645000" (driver="qemu2")
	I0520 04:39:23.214706   17675 client.go:168] LocalClient.Create starting
	I0520 04:39:23.214800   17675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:23.214858   17675 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:23.214871   17675 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:23.214917   17675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:23.214940   17675 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:23.214952   17675 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:23.215318   17675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:23.429658   17675 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:23.491737   17675 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:23.491748   17675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:23.491963   17675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2
	I0520 04:39:23.504402   17675 main.go:141] libmachine: STDOUT: 
	I0520 04:39:23.504423   17675 main.go:141] libmachine: STDERR: 
	I0520 04:39:23.504487   17675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2 +20000M
	I0520 04:39:23.515652   17675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:23.515668   17675 main.go:141] libmachine: STDERR: 
	I0520 04:39:23.515692   17675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2
	I0520 04:39:23.515699   17675 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:23.515730   17675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:61:ad:79:05:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2
	I0520 04:39:23.517535   17675 main.go:141] libmachine: STDOUT: 
	I0520 04:39:23.517550   17675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:23.517570   17675 client.go:171] duration metric: took 302.861833ms to LocalClient.Create
	I0520 04:39:25.519753   17675 start.go:128] duration metric: took 2.32960425s to createHost
	I0520 04:39:25.519829   17675 start.go:83] releasing machines lock for "bridge-645000", held for 2.329739125s
	W0520 04:39:25.519961   17675 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:25.526457   17675 out.go:177] * Deleting "bridge-645000" in qemu2 ...
	W0520 04:39:25.554688   17675 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:25.554726   17675 start.go:728] Will try again in 5 seconds ...
	I0520 04:39:30.556728   17675 start.go:360] acquireMachinesLock for bridge-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:30.556838   17675 start.go:364] duration metric: took 88.708µs to acquireMachinesLock for "bridge-645000"
	I0520 04:39:30.556878   17675 start.go:93] Provisioning new machine with config: &{Name:bridge-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:bridge-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:30.556944   17675 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:30.564169   17675 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:30.579156   17675 start.go:159] libmachine.API.Create for "bridge-645000" (driver="qemu2")
	I0520 04:39:30.579180   17675 client.go:168] LocalClient.Create starting
	I0520 04:39:30.579247   17675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:30.579284   17675 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:30.579295   17675 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:30.579326   17675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:30.579347   17675 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:30.579353   17675 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:30.579730   17675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:30.718216   17675 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:30.789538   17675 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:30.789544   17675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:30.789749   17675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2
	I0520 04:39:30.802463   17675 main.go:141] libmachine: STDOUT: 
	I0520 04:39:30.802497   17675 main.go:141] libmachine: STDERR: 
	I0520 04:39:30.802560   17675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2 +20000M
	I0520 04:39:30.814134   17675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:30.814163   17675 main.go:141] libmachine: STDERR: 
	I0520 04:39:30.814175   17675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2
	I0520 04:39:30.814180   17675 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:30.814216   17675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:db:62:fb:e1:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/bridge-645000/disk.qcow2
	I0520 04:39:30.816095   17675 main.go:141] libmachine: STDOUT: 
	I0520 04:39:30.816114   17675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:30.816127   17675 client.go:171] duration metric: took 236.947083ms to LocalClient.Create
	I0520 04:39:32.818308   17675 start.go:128] duration metric: took 2.261361542s to createHost
	I0520 04:39:32.818482   17675 start.go:83] releasing machines lock for "bridge-645000", held for 2.261561625s
	W0520 04:39:32.818799   17675 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:32.826501   17675 out.go:177] 
	W0520 04:39:32.831512   17675 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:39:32.831580   17675 out.go:239] * 
	* 
	W0520 04:39:32.834485   17675 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:39:32.840547   17675 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.825920666s)

                                                
                                                
-- stdout --
	* [kubenet-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-645000" primary control-plane node in "kubenet-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:39:35.049955   17790 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:39:35.050085   17790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:35.050088   17790 out.go:304] Setting ErrFile to fd 2...
	I0520 04:39:35.050091   17790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:35.050225   17790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:39:35.051367   17790 out.go:298] Setting JSON to false
	I0520 04:39:35.067754   17790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9546,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:39:35.067817   17790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:39:35.074154   17790 out.go:177] * [kubenet-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:39:35.082081   17790 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:39:35.082176   17790 notify.go:220] Checking for updates...
	I0520 04:39:35.089149   17790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:39:35.090506   17790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:39:35.093168   17790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:39:35.096166   17790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:39:35.099227   17790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:39:35.102591   17790 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:39:35.102658   17790 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:39:35.102705   17790 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:39:35.107218   17790 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:39:35.114127   17790 start.go:297] selected driver: qemu2
	I0520 04:39:35.114136   17790 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:39:35.114143   17790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:39:35.116394   17790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:39:35.119183   17790 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:39:35.122277   17790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:39:35.122302   17790 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0520 04:39:35.122335   17790 start.go:340] cluster config:
	{Name:kubenet-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:39:35.126970   17790 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:39:35.134198   17790 out.go:177] * Starting "kubenet-645000" primary control-plane node in "kubenet-645000" cluster
	I0520 04:39:35.137111   17790 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:39:35.137130   17790 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:39:35.137150   17790 cache.go:56] Caching tarball of preloaded images
	I0520 04:39:35.137227   17790 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:39:35.137247   17790 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:39:35.137310   17790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kubenet-645000/config.json ...
	I0520 04:39:35.137321   17790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/kubenet-645000/config.json: {Name:mkd52e48c22c5199c35d6fdf039e283b85795696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:39:35.137656   17790 start.go:360] acquireMachinesLock for kubenet-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:35.137693   17790 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "kubenet-645000"
	I0520 04:39:35.137706   17790 start.go:93] Provisioning new machine with config: &{Name:kubenet-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kubenet-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:35.137749   17790 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:35.146049   17790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:35.163324   17790 start.go:159] libmachine.API.Create for "kubenet-645000" (driver="qemu2")
	I0520 04:39:35.163360   17790 client.go:168] LocalClient.Create starting
	I0520 04:39:35.163423   17790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:35.163455   17790 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:35.163470   17790 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:35.163508   17790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:35.163530   17790 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:35.163537   17790 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:35.163998   17790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:35.301571   17790 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:35.370724   17790 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:35.370730   17790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:35.370948   17790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2
	I0520 04:39:35.383506   17790 main.go:141] libmachine: STDOUT: 
	I0520 04:39:35.383522   17790 main.go:141] libmachine: STDERR: 
	I0520 04:39:35.383604   17790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2 +20000M
	I0520 04:39:35.394835   17790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:35.394858   17790 main.go:141] libmachine: STDERR: 
	I0520 04:39:35.394867   17790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2
	I0520 04:39:35.394873   17790 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:35.394912   17790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:06:e5:1b:62:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2
	I0520 04:39:35.396731   17790 main.go:141] libmachine: STDOUT: 
	I0520 04:39:35.396752   17790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:35.396784   17790 client.go:171] duration metric: took 233.405333ms to LocalClient.Create
	I0520 04:39:37.398980   17790 start.go:128] duration metric: took 2.261227542s to createHost
	I0520 04:39:37.399131   17790 start.go:83] releasing machines lock for "kubenet-645000", held for 2.261420875s
	W0520 04:39:37.399226   17790 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:37.413252   17790 out.go:177] * Deleting "kubenet-645000" in qemu2 ...
	W0520 04:39:37.438231   17790 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:37.438277   17790 start.go:728] Will try again in 5 seconds ...
	I0520 04:39:42.440501   17790 start.go:360] acquireMachinesLock for kubenet-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:42.441072   17790 start.go:364] duration metric: took 454.958µs to acquireMachinesLock for "kubenet-645000"
	I0520 04:39:42.441184   17790 start.go:93] Provisioning new machine with config: &{Name:kubenet-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kubenet-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:42.441473   17790 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:42.450045   17790 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:42.493341   17790 start.go:159] libmachine.API.Create for "kubenet-645000" (driver="qemu2")
	I0520 04:39:42.493389   17790 client.go:168] LocalClient.Create starting
	I0520 04:39:42.493514   17790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:42.493597   17790 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:42.493612   17790 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:42.493680   17790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:42.493724   17790 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:42.493737   17790 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:42.494312   17790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:42.639975   17790 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:42.784600   17790 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:42.784608   17790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:42.784816   17790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2
	I0520 04:39:42.797839   17790 main.go:141] libmachine: STDOUT: 
	I0520 04:39:42.797860   17790 main.go:141] libmachine: STDERR: 
	I0520 04:39:42.797915   17790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2 +20000M
	I0520 04:39:42.809125   17790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:42.809145   17790 main.go:141] libmachine: STDERR: 
	I0520 04:39:42.809155   17790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2
	I0520 04:39:42.809159   17790 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:42.809192   17790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:cb:f0:60:b2:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/kubenet-645000/disk.qcow2
	I0520 04:39:42.811022   17790 main.go:141] libmachine: STDOUT: 
	I0520 04:39:42.811044   17790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:42.811066   17790 client.go:171] duration metric: took 317.675083ms to LocalClient.Create
	I0520 04:39:44.813164   17790 start.go:128] duration metric: took 2.371689s to createHost
	I0520 04:39:44.813191   17790 start.go:83] releasing machines lock for "kubenet-645000", held for 2.372095959s
	W0520 04:39:44.813361   17790 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:44.822616   17790 out.go:177] 
	W0520 04:39:44.828809   17790 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:39:44.828821   17790 out.go:239] * 
	* 
	W0520 04:39:44.829863   17790 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:39:44.838729   17790 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
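Note: every Start failure in this group exits the same way — the socket_vmnet client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so host provisioning for the qemu2 VM never completes. A minimal sketch for checking the daemon on the CI host, assuming the install paths shown in the log above (the launchd service label is a guess and may differ per install):

    # check that the daemon socket exists and that a socket_vmnet process is running
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # on a launchd-managed install, look for the service (label is an assumption)
    sudo launchctl list | grep -i socket_vmnet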

TestNetworkPlugins/group/custom-flannel/Start (9.68s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.677938583s)

-- stdout --
	* [custom-flannel-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-645000" primary control-plane node in "custom-flannel-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:39:47.038152   17900 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:39:47.038288   17900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:47.038292   17900 out.go:304] Setting ErrFile to fd 2...
	I0520 04:39:47.038294   17900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:47.038416   17900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:39:47.039498   17900 out.go:298] Setting JSON to false
	I0520 04:39:47.055873   17900 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9558,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:39:47.055941   17900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:39:47.060998   17900 out.go:177] * [custom-flannel-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:39:47.069175   17900 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:39:47.074156   17900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:39:47.069295   17900 notify.go:220] Checking for updates...
	I0520 04:39:47.080185   17900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:39:47.083191   17900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:39:47.086193   17900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:39:47.089195   17900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:39:47.090980   17900 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:39:47.091042   17900 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:39:47.091095   17900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:39:47.095171   17900 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:39:47.101968   17900 start.go:297] selected driver: qemu2
	I0520 04:39:47.101975   17900 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:39:47.101981   17900 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:39:47.104297   17900 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:39:47.107192   17900 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:39:47.110362   17900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:39:47.110382   17900 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0520 04:39:47.110392   17900 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0520 04:39:47.110437   17900 start.go:340] cluster config:
	{Name:custom-flannel-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:39:47.114587   17900 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:39:47.122186   17900 out.go:177] * Starting "custom-flannel-645000" primary control-plane node in "custom-flannel-645000" cluster
	I0520 04:39:47.126194   17900 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:39:47.126211   17900 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:39:47.126222   17900 cache.go:56] Caching tarball of preloaded images
	I0520 04:39:47.126274   17900 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:39:47.126279   17900 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:39:47.126343   17900 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/custom-flannel-645000/config.json ...
	I0520 04:39:47.126354   17900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/custom-flannel-645000/config.json: {Name:mkd056bb3a4301ccc4c73eae3cbd833e6ca5e0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:39:47.126559   17900 start.go:360] acquireMachinesLock for custom-flannel-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:47.126594   17900 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "custom-flannel-645000"
	I0520 04:39:47.126605   17900 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:47.126630   17900 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:47.131145   17900 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:47.145764   17900 start.go:159] libmachine.API.Create for "custom-flannel-645000" (driver="qemu2")
	I0520 04:39:47.145788   17900 client.go:168] LocalClient.Create starting
	I0520 04:39:47.145844   17900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:47.145876   17900 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:47.145888   17900 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:47.145929   17900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:47.145951   17900 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:47.145958   17900 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:47.146341   17900 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:47.282984   17900 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:47.324096   17900 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:47.324101   17900 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:47.324293   17900 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2
	I0520 04:39:47.336903   17900 main.go:141] libmachine: STDOUT: 
	I0520 04:39:47.336922   17900 main.go:141] libmachine: STDERR: 
	I0520 04:39:47.336973   17900 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2 +20000M
	I0520 04:39:47.348034   17900 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:47.348054   17900 main.go:141] libmachine: STDERR: 
	I0520 04:39:47.348069   17900 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2
	I0520 04:39:47.348074   17900 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:47.348104   17900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:bf:b4:39:06:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2
	I0520 04:39:47.350052   17900 main.go:141] libmachine: STDOUT: 
	I0520 04:39:47.350066   17900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:47.350086   17900 client.go:171] duration metric: took 204.29475ms to LocalClient.Create
	I0520 04:39:49.352410   17900 start.go:128] duration metric: took 2.225755042s to createHost
	I0520 04:39:49.352544   17900 start.go:83] releasing machines lock for "custom-flannel-645000", held for 2.225962125s
	W0520 04:39:49.352612   17900 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:49.363818   17900 out.go:177] * Deleting "custom-flannel-645000" in qemu2 ...
	W0520 04:39:49.382751   17900 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:49.382775   17900 start.go:728] Will try again in 5 seconds ...
	I0520 04:39:54.384852   17900 start.go:360] acquireMachinesLock for custom-flannel-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:54.385173   17900 start.go:364] duration metric: took 273.417µs to acquireMachinesLock for "custom-flannel-645000"
	I0520 04:39:54.385264   17900 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:54.385398   17900 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:54.395922   17900 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:54.432296   17900 start.go:159] libmachine.API.Create for "custom-flannel-645000" (driver="qemu2")
	I0520 04:39:54.432342   17900 client.go:168] LocalClient.Create starting
	I0520 04:39:54.432474   17900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:54.432538   17900 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:54.432552   17900 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:54.432655   17900 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:54.432693   17900 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:54.432703   17900 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:54.433235   17900 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:54.577237   17900 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:54.627684   17900 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:54.627691   17900 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:54.627893   17900 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2
	I0520 04:39:54.640598   17900 main.go:141] libmachine: STDOUT: 
	I0520 04:39:54.640619   17900 main.go:141] libmachine: STDERR: 
	I0520 04:39:54.640678   17900 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2 +20000M
	I0520 04:39:54.651886   17900 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:54.651922   17900 main.go:141] libmachine: STDERR: 
	I0520 04:39:54.651933   17900 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2
	I0520 04:39:54.651939   17900 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:54.651972   17900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e3:f5:ca:0d:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/custom-flannel-645000/disk.qcow2
	I0520 04:39:54.653874   17900 main.go:141] libmachine: STDOUT: 
	I0520 04:39:54.653890   17900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:54.653903   17900 client.go:171] duration metric: took 221.5585ms to LocalClient.Create
	I0520 04:39:56.656208   17900 start.go:128] duration metric: took 2.2708075s to createHost
	I0520 04:39:56.656292   17900 start.go:83] releasing machines lock for "custom-flannel-645000", held for 2.271126208s
	W0520 04:39:56.656615   17900 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:39:56.666200   17900 out.go:177] 
	W0520 04:39:56.670506   17900 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:39:56.670571   17900 out.go:239] * 
	* 
	W0520 04:39:56.672797   17900 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:39:56.681428   17900 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.68s)

TestNetworkPlugins/group/calico/Start (9.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.736941125s)

-- stdout --
	* [calico-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-645000" primary control-plane node in "calico-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:39:59.044206   18021 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:39:59.044350   18021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:59.044353   18021 out.go:304] Setting ErrFile to fd 2...
	I0520 04:39:59.044356   18021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:39:59.044485   18021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:39:59.045595   18021 out.go:298] Setting JSON to false
	I0520 04:39:59.062060   18021 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9570,"bootTime":1716195629,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:39:59.062128   18021 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:39:59.068220   18021 out.go:177] * [calico-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:39:59.076390   18021 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:39:59.076469   18021 notify.go:220] Checking for updates...
	I0520 04:39:59.080277   18021 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:39:59.083347   18021 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:39:59.086345   18021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:39:59.089218   18021 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:39:59.092373   18021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:39:59.095720   18021 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:39:59.095797   18021 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:39:59.095850   18021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:39:59.099284   18021 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:39:59.106349   18021 start.go:297] selected driver: qemu2
	I0520 04:39:59.106356   18021 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:39:59.106366   18021 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:39:59.108523   18021 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:39:59.111302   18021 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:39:59.114452   18021 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:39:59.114475   18021 cni.go:84] Creating CNI manager for "calico"
	I0520 04:39:59.114479   18021 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0520 04:39:59.114522   18021 start.go:340] cluster config:
	{Name:calico-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:39:59.118655   18021 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:39:59.124226   18021 out.go:177] * Starting "calico-645000" primary control-plane node in "calico-645000" cluster
	I0520 04:39:59.128370   18021 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:39:59.128385   18021 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:39:59.128400   18021 cache.go:56] Caching tarball of preloaded images
	I0520 04:39:59.128452   18021 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:39:59.128457   18021 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:39:59.128527   18021 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/calico-645000/config.json ...
	I0520 04:39:59.128537   18021 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/calico-645000/config.json: {Name:mk684cc065758153839c65f06202ee11f61dc82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:39:59.128792   18021 start.go:360] acquireMachinesLock for calico-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:39:59.128821   18021 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "calico-645000"
	I0520 04:39:59.128832   18021 start.go:93] Provisioning new machine with config: &{Name:calico-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:calico-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:39:59.128858   18021 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:39:59.137289   18021 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:39:59.152715   18021 start.go:159] libmachine.API.Create for "calico-645000" (driver="qemu2")
	I0520 04:39:59.152739   18021 client.go:168] LocalClient.Create starting
	I0520 04:39:59.152794   18021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:39:59.152822   18021 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:59.152835   18021 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:59.152878   18021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:39:59.152901   18021 main.go:141] libmachine: Decoding PEM data...
	I0520 04:39:59.152914   18021 main.go:141] libmachine: Parsing certificate...
	I0520 04:39:59.153250   18021 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:39:59.291471   18021 main.go:141] libmachine: Creating SSH key...
	I0520 04:39:59.351057   18021 main.go:141] libmachine: Creating Disk image...
	I0520 04:39:59.351062   18021 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:39:59.351264   18021 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2
	I0520 04:39:59.363843   18021 main.go:141] libmachine: STDOUT: 
	I0520 04:39:59.363863   18021 main.go:141] libmachine: STDERR: 
	I0520 04:39:59.363913   18021 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2 +20000M
	I0520 04:39:59.375471   18021 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:39:59.375489   18021 main.go:141] libmachine: STDERR: 
	I0520 04:39:59.375502   18021 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2
	I0520 04:39:59.375507   18021 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:39:59.375545   18021 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e8:95:21:c9:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2
	I0520 04:39:59.377373   18021 main.go:141] libmachine: STDOUT: 
	I0520 04:39:59.377388   18021 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:39:59.377405   18021 client.go:171] duration metric: took 224.662917ms to LocalClient.Create
	I0520 04:40:01.379507   18021 start.go:128] duration metric: took 2.250661125s to createHost
	I0520 04:40:01.379549   18021 start.go:83] releasing machines lock for "calico-645000", held for 2.250749416s
	W0520 04:40:01.379608   18021 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:01.387162   18021 out.go:177] * Deleting "calico-645000" in qemu2 ...
	W0520 04:40:01.410794   18021 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:01.410814   18021 start.go:728] Will try again in 5 seconds ...
	I0520 04:40:06.412902   18021 start.go:360] acquireMachinesLock for calico-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:06.413159   18021 start.go:364] duration metric: took 211.25µs to acquireMachinesLock for "calico-645000"
	I0520 04:40:06.413219   18021 start.go:93] Provisioning new machine with config: &{Name:calico-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:calico-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:06.413337   18021 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:06.419703   18021 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:40:06.446530   18021 start.go:159] libmachine.API.Create for "calico-645000" (driver="qemu2")
	I0520 04:40:06.446576   18021 client.go:168] LocalClient.Create starting
	I0520 04:40:06.446686   18021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:06.446739   18021 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:06.446752   18021 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:06.446794   18021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:06.446825   18021 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:06.446840   18021 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:06.447218   18021 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:06.587512   18021 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:06.683712   18021 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:06.683718   18021 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:06.683921   18021 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2
	I0520 04:40:06.696422   18021 main.go:141] libmachine: STDOUT: 
	I0520 04:40:06.696450   18021 main.go:141] libmachine: STDERR: 
	I0520 04:40:06.696500   18021 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2 +20000M
	I0520 04:40:06.708927   18021 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:06.708950   18021 main.go:141] libmachine: STDERR: 
	I0520 04:40:06.708968   18021 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2
	I0520 04:40:06.708972   18021 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:06.709004   18021 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:48:b8:2e:9d:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/calico-645000/disk.qcow2
	I0520 04:40:06.711155   18021 main.go:141] libmachine: STDOUT: 
	I0520 04:40:06.711183   18021 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:06.711195   18021 client.go:171] duration metric: took 264.610292ms to LocalClient.Create
	I0520 04:40:08.713400   18021 start.go:128] duration metric: took 2.300050333s to createHost
	I0520 04:40:08.713483   18021 start.go:83] releasing machines lock for "calico-645000", held for 2.300337667s
	W0520 04:40:08.713850   18021 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:08.724018   18021 out.go:177] 
	W0520 04:40:08.729518   18021 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:08.729576   18021 out.go:239] * 
	* 
	W0520 04:40:08.732292   18021 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:40:08.739503   18021 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.74s)

TestNetworkPlugins/group/false/Start (9.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-645000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.80308525s)

-- stdout --
	* [false-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-645000" primary control-plane node in "false-645000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-645000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:40:11.180695   18141 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:11.180851   18141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:11.180854   18141 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:11.180857   18141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:11.180968   18141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:11.182015   18141 out.go:298] Setting JSON to false
	I0520 04:40:11.198047   18141 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9582,"bootTime":1716195629,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:40:11.198121   18141 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:40:11.202333   18141 out.go:177] * [false-645000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:40:11.210199   18141 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:40:11.214256   18141 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:40:11.210247   18141 notify.go:220] Checking for updates...
	I0520 04:40:11.220205   18141 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:40:11.223245   18141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:40:11.224575   18141 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:40:11.227230   18141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:40:11.230619   18141 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:40:11.230683   18141 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:40:11.230730   18141 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:40:11.235104   18141 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:40:11.242191   18141 start.go:297] selected driver: qemu2
	I0520 04:40:11.242197   18141 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:40:11.242202   18141 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:40:11.244427   18141 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:40:11.247226   18141 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:40:11.250329   18141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:40:11.250349   18141 cni.go:84] Creating CNI manager for "false"
	I0520 04:40:11.250396   18141 start.go:340] cluster config:
	{Name:false-645000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:11.254607   18141 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:11.261154   18141 out.go:177] * Starting "false-645000" primary control-plane node in "false-645000" cluster
	I0520 04:40:11.265217   18141 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:40:11.265233   18141 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:40:11.265246   18141 cache.go:56] Caching tarball of preloaded images
	I0520 04:40:11.265301   18141 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:40:11.265306   18141 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:40:11.265376   18141 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/false-645000/config.json ...
	I0520 04:40:11.265387   18141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/false-645000/config.json: {Name:mk48b8cd2d46da7fdc40a26950eba816d2214e9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:40:11.265600   18141 start.go:360] acquireMachinesLock for false-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:11.265631   18141 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "false-645000"
	I0520 04:40:11.265642   18141 start.go:93] Provisioning new machine with config: &{Name:false-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:false-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:11.265672   18141 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:11.269245   18141 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:40:11.283973   18141 start.go:159] libmachine.API.Create for "false-645000" (driver="qemu2")
	I0520 04:40:11.284000   18141 client.go:168] LocalClient.Create starting
	I0520 04:40:11.284061   18141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:11.284093   18141 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:11.284106   18141 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:11.284142   18141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:11.284164   18141 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:11.284173   18141 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:11.284546   18141 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:11.423854   18141 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:11.554400   18141 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:11.554413   18141 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:11.554624   18141 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2
	I0520 04:40:11.567675   18141 main.go:141] libmachine: STDOUT: 
	I0520 04:40:11.567695   18141 main.go:141] libmachine: STDERR: 
	I0520 04:40:11.567755   18141 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2 +20000M
	I0520 04:40:11.578867   18141 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:11.578888   18141 main.go:141] libmachine: STDERR: 
	I0520 04:40:11.578904   18141 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2
	I0520 04:40:11.578908   18141 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:11.578939   18141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a5:ab:8b:27:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2
	I0520 04:40:11.580710   18141 main.go:141] libmachine: STDOUT: 
	I0520 04:40:11.580730   18141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:11.580748   18141 client.go:171] duration metric: took 296.746208ms to LocalClient.Create
	I0520 04:40:13.582830   18141 start.go:128] duration metric: took 2.317174834s to createHost
	I0520 04:40:13.582895   18141 start.go:83] releasing machines lock for "false-645000", held for 2.317287333s
	W0520 04:40:13.582941   18141 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:13.592414   18141 out.go:177] * Deleting "false-645000" in qemu2 ...
	W0520 04:40:13.612063   18141 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:13.612076   18141 start.go:728] Will try again in 5 seconds ...
	I0520 04:40:18.614173   18141 start.go:360] acquireMachinesLock for false-645000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:18.614359   18141 start.go:364] duration metric: took 146.25µs to acquireMachinesLock for "false-645000"
	I0520 04:40:18.614384   18141 start.go:93] Provisioning new machine with config: &{Name:false-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:false-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:18.614477   18141 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:18.622370   18141 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 04:40:18.642552   18141 start.go:159] libmachine.API.Create for "false-645000" (driver="qemu2")
	I0520 04:40:18.642586   18141 client.go:168] LocalClient.Create starting
	I0520 04:40:18.642667   18141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:18.642706   18141 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:18.642716   18141 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:18.642755   18141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:18.642785   18141 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:18.642800   18141 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:18.643073   18141 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:18.781568   18141 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:18.884365   18141 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:18.884373   18141 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:18.884603   18141 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2
	I0520 04:40:18.897664   18141 main.go:141] libmachine: STDOUT: 
	I0520 04:40:18.897684   18141 main.go:141] libmachine: STDERR: 
	I0520 04:40:18.897738   18141 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2 +20000M
	I0520 04:40:18.909031   18141 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:18.909054   18141 main.go:141] libmachine: STDERR: 
	I0520 04:40:18.909077   18141 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2
	I0520 04:40:18.909083   18141 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:18.909117   18141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:88:c6:c4:35:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/false-645000/disk.qcow2
	I0520 04:40:18.910986   18141 main.go:141] libmachine: STDOUT: 
	I0520 04:40:18.911005   18141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:18.911019   18141 client.go:171] duration metric: took 268.432458ms to LocalClient.Create
	I0520 04:40:20.913207   18141 start.go:128] duration metric: took 2.298726875s to createHost
	I0520 04:40:20.913345   18141 start.go:83] releasing machines lock for "false-645000", held for 2.298992208s
	W0520 04:40:20.913781   18141 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-645000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:20.926598   18141 out.go:177] 
	W0520 04:40:20.930663   18141 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:20.930681   18141 out.go:239] * 
	* 
	W0520 04:40:20.933194   18141 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:40:20.943502   18141 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.80s)

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.819523208s)

-- stdout --
	* [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-178000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:40:23.073293   18259 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:23.073425   18259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:23.073431   18259 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:23.073433   18259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:23.073577   18259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:23.074730   18259 out.go:298] Setting JSON to false
	I0520 04:40:23.090824   18259 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9594,"bootTime":1716195629,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:40:23.090885   18259 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:40:23.096991   18259 out.go:177] * [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:40:23.104999   18259 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:40:23.105036   18259 notify.go:220] Checking for updates...
	I0520 04:40:23.111982   18259 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:40:23.115053   18259 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:40:23.118031   18259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:40:23.121003   18259 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:40:23.124013   18259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:40:23.127231   18259 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:40:23.127298   18259 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:40:23.127347   18259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:40:23.130971   18259 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:40:23.137923   18259 start.go:297] selected driver: qemu2
	I0520 04:40:23.137929   18259 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:40:23.137934   18259 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:40:23.140049   18259 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:40:23.142970   18259 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:40:23.146115   18259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:40:23.146137   18259 cni.go:84] Creating CNI manager for ""
	I0520 04:40:23.146144   18259 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:40:23.146188   18259 start.go:340] cluster config:
	{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:23.150373   18259 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:23.157973   18259 out.go:177] * Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	I0520 04:40:23.160889   18259 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:40:23.160906   18259 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:40:23.160919   18259 cache.go:56] Caching tarball of preloaded images
	I0520 04:40:23.160982   18259 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:40:23.160988   18259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:40:23.161043   18259 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/old-k8s-version-178000/config.json ...
	I0520 04:40:23.161058   18259 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/old-k8s-version-178000/config.json: {Name:mk34ef960a7425359236aa39b2e1fe71e48d0559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:40:23.161312   18259 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:23.161344   18259 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "old-k8s-version-178000"
	I0520 04:40:23.161356   18259 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:23.161380   18259 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:23.167986   18259 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:40:23.183858   18259 start.go:159] libmachine.API.Create for "old-k8s-version-178000" (driver="qemu2")
	I0520 04:40:23.183888   18259 client.go:168] LocalClient.Create starting
	I0520 04:40:23.183959   18259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:23.183987   18259 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:23.183995   18259 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:23.184034   18259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:23.184058   18259 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:23.184066   18259 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:23.184527   18259 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:23.423173   18259 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:23.513882   18259 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:23.513889   18259 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:23.514106   18259 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:23.527079   18259 main.go:141] libmachine: STDOUT: 
	I0520 04:40:23.527110   18259 main.go:141] libmachine: STDERR: 
	I0520 04:40:23.527165   18259 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2 +20000M
	I0520 04:40:23.538857   18259 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:23.538873   18259 main.go:141] libmachine: STDERR: 
	I0520 04:40:23.538894   18259 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:23.538899   18259 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:23.538926   18259 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:f6:52:0e:da:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:23.540738   18259 main.go:141] libmachine: STDOUT: 
	I0520 04:40:23.540755   18259 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:23.540776   18259 client.go:171] duration metric: took 356.887041ms to LocalClient.Create
	I0520 04:40:25.542967   18259 start.go:128] duration metric: took 2.381585333s to createHost
	I0520 04:40:25.543053   18259 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 2.381728708s
	W0520 04:40:25.543163   18259 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:25.552708   18259 out.go:177] * Deleting "old-k8s-version-178000" in qemu2 ...
	W0520 04:40:25.575493   18259 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:25.575526   18259 start.go:728] Will try again in 5 seconds ...
	I0520 04:40:30.577624   18259 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:30.577844   18259 start.go:364] duration metric: took 181.375µs to acquireMachinesLock for "old-k8s-version-178000"
	I0520 04:40:30.577873   18259 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:30.577959   18259 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:30.585291   18259 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:40:30.614507   18259 start.go:159] libmachine.API.Create for "old-k8s-version-178000" (driver="qemu2")
	I0520 04:40:30.614550   18259 client.go:168] LocalClient.Create starting
	I0520 04:40:30.614655   18259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:30.614708   18259 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:30.614720   18259 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:30.614770   18259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:30.614812   18259 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:30.614821   18259 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:30.615329   18259 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:30.754741   18259 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:30.793466   18259 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:30.793475   18259 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:30.793677   18259 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:30.806781   18259 main.go:141] libmachine: STDOUT: 
	I0520 04:40:30.806801   18259 main.go:141] libmachine: STDERR: 
	I0520 04:40:30.806877   18259 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2 +20000M
	I0520 04:40:30.818071   18259 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:30.818098   18259 main.go:141] libmachine: STDERR: 
	I0520 04:40:30.818114   18259 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:30.818118   18259 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:30.818159   18259 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bc:e2:c3:8f:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:30.819903   18259 main.go:141] libmachine: STDOUT: 
	I0520 04:40:30.819919   18259 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:30.819931   18259 client.go:171] duration metric: took 205.377167ms to LocalClient.Create
	I0520 04:40:32.822114   18259 start.go:128] duration metric: took 2.244137833s to createHost
	I0520 04:40:32.822183   18259 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 2.244347291s
	W0520 04:40:32.822612   18259 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:32.831698   18259 out.go:177] 
	W0520 04:40:32.836980   18259 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:32.837034   18259 out.go:239] * 
	* 
	W0520 04:40:32.839874   18259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:40:32.850728   18259 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (64.278917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
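
Every failure in this group traces back to the same driver-level error: the connection to /var/run/socket_vmnet was refused. "Connection refused" on a unix socket means no process is accepting connections at that path, i.e. the socket_vmnet daemon that the qemu2 driver depends on was not running on the build host. The standalone Go sketch below (not part of the test suite; file name and output wording are illustrative) reproduces the same probe that fails inside libmachine:

// socketprobe.go - illustrative sketch only: dials the unix socket that
// socket_vmnet_client is pointed at in the log above and reports the error.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path taken from the log

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// On this host the expected result is "connect: connection refused",
		// matching the STDERR captured by libmachine above.
		fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", socketPath)
}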

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-178000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-178000 create -f testdata/busybox.yaml: exit status 1 (30.362791ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-178000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-178000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (29.121375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.216084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-178000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-178000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-178000 describe deploy/metrics-server -n kube-system: exit status 1 (26.843916ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-178000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-178000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.558292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1825755s)

                                                
                                                
-- stdout --
	* [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:40:37.066837   18326 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:37.066961   18326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:37.066965   18326 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:37.066967   18326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:37.067093   18326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:37.068084   18326 out.go:298] Setting JSON to false
	I0520 04:40:37.084171   18326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9608,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:40:37.084236   18326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:40:37.089219   18326 out.go:177] * [old-k8s-version-178000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:40:37.096212   18326 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:40:37.100230   18326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:40:37.096290   18326 notify.go:220] Checking for updates...
	I0520 04:40:37.106251   18326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:40:37.109226   18326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:40:37.112063   18326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:40:37.115205   18326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:40:37.118483   18326 config.go:182] Loaded profile config "old-k8s-version-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 04:40:37.120161   18326 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 04:40:37.123177   18326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:40:37.127136   18326 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:40:37.132174   18326 start.go:297] selected driver: qemu2
	I0520 04:40:37.132181   18326 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:37.132251   18326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:40:37.134568   18326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:40:37.134599   18326 cni.go:84] Creating CNI manager for ""
	I0520 04:40:37.134606   18326 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:40:37.134632   18326 start.go:340] cluster config:
	{Name:old-k8s-version-178000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:37.138915   18326 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:37.146181   18326 out.go:177] * Starting "old-k8s-version-178000" primary control-plane node in "old-k8s-version-178000" cluster
	I0520 04:40:37.150229   18326 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:40:37.150247   18326 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:40:37.150259   18326 cache.go:56] Caching tarball of preloaded images
	I0520 04:40:37.150325   18326 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:40:37.150331   18326 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:40:37.150400   18326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/old-k8s-version-178000/config.json ...
	I0520 04:40:37.150849   18326 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:37.150877   18326 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "old-k8s-version-178000"
	I0520 04:40:37.150887   18326 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:40:37.150891   18326 fix.go:54] fixHost starting: 
	I0520 04:40:37.151005   18326 fix.go:112] recreateIfNeeded on old-k8s-version-178000: state=Stopped err=<nil>
	W0520 04:40:37.151014   18326 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:40:37.155123   18326 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	I0520 04:40:37.163212   18326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bc:e2:c3:8f:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:37.165246   18326 main.go:141] libmachine: STDOUT: 
	I0520 04:40:37.165266   18326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:37.165295   18326 fix.go:56] duration metric: took 14.402ms for fixHost
	I0520 04:40:37.165298   18326 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 14.417458ms
	W0520 04:40:37.165305   18326 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:37.165343   18326 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:37.165347   18326 start.go:728] Will try again in 5 seconds ...
	I0520 04:40:42.167598   18326 start.go:360] acquireMachinesLock for old-k8s-version-178000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:42.168147   18326 start.go:364] duration metric: took 434.916µs to acquireMachinesLock for "old-k8s-version-178000"
	I0520 04:40:42.168334   18326 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:40:42.168356   18326 fix.go:54] fixHost starting: 
	I0520 04:40:42.169190   18326 fix.go:112] recreateIfNeeded on old-k8s-version-178000: state=Stopped err=<nil>
	W0520 04:40:42.169215   18326 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:40:42.174002   18326 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-178000" ...
	I0520 04:40:42.178815   18326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bc:e2:c3:8f:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/old-k8s-version-178000/disk.qcow2
	I0520 04:40:42.188938   18326 main.go:141] libmachine: STDOUT: 
	I0520 04:40:42.188996   18326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:42.189102   18326 fix.go:56] duration metric: took 20.749083ms for fixHost
	I0520 04:40:42.189117   18326 start.go:83] releasing machines lock for "old-k8s-version-178000", held for 20.947291ms
	W0520 04:40:42.189313   18326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:42.196781   18326 out.go:177] 
	W0520 04:40:42.200833   18326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:42.200856   18326 out.go:239] * 
	* 
	W0520 04:40:42.202827   18326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:40:42.212707   18326 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-178000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (53.987417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
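
The second start follows the same pattern as the first: the restart attempt fails, minikube logs "Will try again in 5 seconds", retries once, and then exits with GUEST_PROVISION. A minimal retry loop in the same spirit (illustrative only, not minikube's implementation) looks like this:

// retryprobe.go - illustrative sketch of the retry-once-after-5s behaviour
// visible in the log; it reuses the unix-socket probe from the earlier sketch.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func probe(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const socketPath = "/var/run/socket_vmnet"

	for attempt := 1; attempt <= 2; attempt++ { // the log shows two attempts
		if err := probe(socketPath); err != nil {
			fmt.Fprintf(os.Stderr, "attempt %d: %v\n", attempt, err)
			if attempt < 2 {
				time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			}
			continue
		}
		fmt.Println("socket_vmnet reachable; start could proceed")
		return
	}
	os.Exit(1)
}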

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-178000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (31.224583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-178000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.982833ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-178000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.452042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-178000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.248167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
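
The "(-want +got)" layout of the missing-images list is the diff convention used by github.com/google/go-cmp. Assuming the test compares the expected image list against the output of "minikube image list" with that package (an assumption based only on the diff format, not on the test source), the failure can be reproduced in isolation:

// imagesdiff.go - illustrative sketch: because the VM never started,
// "image list" returns nothing, so every expected v1.20.0 image ends up
// on the "-want" side of the diff. The go-cmp usage here is an assumption.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/pause:3.2",
		// ... remaining expected images elided
	}
	got := []string{} // nothing listed: the qemu2 VM was never provisioned

	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}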

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-178000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-178000 --alsologtostderr -v=1: exit status 83 (41.135166ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-178000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-178000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:40:42.461496   18345 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:42.462453   18345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:42.462457   18345 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:42.462459   18345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:42.462623   18345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:42.462846   18345 out.go:298] Setting JSON to false
	I0520 04:40:42.462852   18345 mustload.go:65] Loading cluster: old-k8s-version-178000
	I0520 04:40:42.463032   18345 config.go:182] Loaded profile config "old-k8s-version-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 04:40:42.467813   18345 out.go:177] * The control-plane node old-k8s-version-178000 host is not running: state=Stopped
	I0520 04:40:42.470851   18345 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-178000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-178000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.172833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (28.181333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-969000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-969000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (10.099865542s)

                                                
                                                
-- stdout --
	* [no-preload-969000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-969000" primary control-plane node in "no-preload-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:40:42.919575   18368 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:42.919718   18368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:42.919721   18368 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:42.919723   18368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:42.919851   18368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:42.920887   18368 out.go:298] Setting JSON to false
	I0520 04:40:42.937263   18368 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9613,"bootTime":1716195629,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:40:42.937331   18368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:40:42.942087   18368 out.go:177] * [no-preload-969000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:40:42.948955   18368 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:40:42.953047   18368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:40:42.949002   18368 notify.go:220] Checking for updates...
	I0520 04:40:42.958985   18368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:40:42.962056   18368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:40:42.964880   18368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:40:42.968010   18368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:40:42.971336   18368 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:40:42.971400   18368 config.go:182] Loaded profile config "stopped-upgrade-484000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 04:40:42.971442   18368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:40:42.975012   18368 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:40:42.982027   18368 start.go:297] selected driver: qemu2
	I0520 04:40:42.982037   18368 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:40:42.982044   18368 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:40:42.984313   18368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:40:42.985505   18368 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:40:42.988126   18368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:40:42.988141   18368 cni.go:84] Creating CNI manager for ""
	I0520 04:40:42.988151   18368 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:40:42.988155   18368 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:40:42.988184   18368 start.go:340] cluster config:
	{Name:no-preload-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:42.992575   18368 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:42.999926   18368 out.go:177] * Starting "no-preload-969000" primary control-plane node in "no-preload-969000" cluster
	I0520 04:40:43.004095   18368 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:40:43.004173   18368 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/no-preload-969000/config.json ...
	I0520 04:40:43.004189   18368 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/no-preload-969000/config.json: {Name:mk52bac8ce315f22b4745d22143ebe367c9a9e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:40:43.004205   18368 cache.go:107] acquiring lock: {Name:mk444a7ecc9a22caf1d26a46ca1e133e693a2457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004202   18368 cache.go:107] acquiring lock: {Name:mk39fdd918e0ddfa85f695b38d22ed352e726f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004271   18368 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 04:40:43.004268   18368 cache.go:107] acquiring lock: {Name:mk2cd06d1ebc1058d22c38f5321f5d936cef7d23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004277   18368 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 73.75µs
	I0520 04:40:43.004283   18368 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 04:40:43.004292   18368 cache.go:107] acquiring lock: {Name:mk543c69021fa2b9b2c9ce52d092381e1045edbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004419   18368 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 04:40:43.004420   18368 cache.go:107] acquiring lock: {Name:mk53d40f955679581c402fc3a6c580ab4e0ed960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004428   18368 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 04:40:43.004370   18368 cache.go:107] acquiring lock: {Name:mk05bf596f604cffc8b3d84a74a73c6df1fcf85e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004433   18368 cache.go:107] acquiring lock: {Name:mk31970d30a33a1181f78fe9a9eb5a5c6558aef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004478   18368 cache.go:107] acquiring lock: {Name:mka0fdc66695a7c5c0f4e1a46eeb0a16be7e8556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:43.004541   18368 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 04:40:43.004544   18368 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 04:40:43.004547   18368 start.go:360] acquireMachinesLock for no-preload-969000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:43.004437   18368 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 04:40:43.004602   18368 start.go:364] duration metric: took 49.583µs to acquireMachinesLock for "no-preload-969000"
	I0520 04:40:43.004613   18368 start.go:93] Provisioning new machine with config: &{Name:no-preload-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:43.004647   18368 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:43.013063   18368 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:40:43.004764   18368 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 04:40:43.004779   18368 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 04:40:43.018305   18368 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 04:40:43.022313   18368 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 04:40:43.022566   18368 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 04:40:43.022812   18368 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 04:40:43.022868   18368 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 04:40:43.022947   18368 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 04:40:43.025217   18368 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 04:40:43.029705   18368 start.go:159] libmachine.API.Create for "no-preload-969000" (driver="qemu2")
	I0520 04:40:43.029724   18368 client.go:168] LocalClient.Create starting
	I0520 04:40:43.029792   18368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:43.029819   18368 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:43.029829   18368 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:43.029867   18368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:43.029889   18368 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:43.029898   18368 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:43.030282   18368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:43.171527   18368 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:43.275423   18368 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:43.275451   18368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:43.275715   18368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:43.289677   18368 main.go:141] libmachine: STDOUT: 
	I0520 04:40:43.289691   18368 main.go:141] libmachine: STDERR: 
	I0520 04:40:43.289748   18368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2 +20000M
	I0520 04:40:43.347114   18368 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:43.347133   18368 main.go:141] libmachine: STDERR: 
	I0520 04:40:43.347144   18368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:43.347149   18368 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:43.347180   18368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:d3:58:3f:bb:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:43.349273   18368 main.go:141] libmachine: STDOUT: 
	I0520 04:40:43.349291   18368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:43.349311   18368 client.go:171] duration metric: took 319.587208ms to LocalClient.Create
	I0520 04:40:43.353958   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 04:40:43.367638   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 04:40:43.368581   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 04:40:43.400372   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0520 04:40:43.418223   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0520 04:40:43.446284   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 04:40:43.478823   18368 cache.go:162] opening:  /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 04:40:43.567657   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 04:40:43.567676   18368 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 563.293666ms
	I0520 04:40:43.567684   18368 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 04:40:45.349553   18368 start.go:128] duration metric: took 2.344923417s to createHost
	I0520 04:40:45.349582   18368 start.go:83] releasing machines lock for "no-preload-969000", held for 2.345005584s
	W0520 04:40:45.349602   18368 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:45.359342   18368 out.go:177] * Deleting "no-preload-969000" in qemu2 ...
	W0520 04:40:45.371342   18368 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:45.371353   18368 start.go:728] Will try again in 5 seconds ...
	I0520 04:40:45.947530   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 04:40:45.947554   18368 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 2.943237083s
	I0520 04:40:45.947568   18368 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 04:40:46.100643   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 04:40:46.100674   18368 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.096314541s
	I0520 04:40:46.100686   18368 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 04:40:46.166401   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 04:40:46.166417   18368 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 3.162206625s
	I0520 04:40:46.166426   18368 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 04:40:46.283435   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 04:40:46.283464   18368 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 3.279081375s
	I0520 04:40:46.283478   18368 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 04:40:48.286813   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 04:40:48.286843   18368 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 5.282715334s
	I0520 04:40:48.286857   18368 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 04:40:50.371553   18368 start.go:360] acquireMachinesLock for no-preload-969000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:50.371999   18368 start.go:364] duration metric: took 378.584µs to acquireMachinesLock for "no-preload-969000"
	I0520 04:40:50.372118   18368 start.go:93] Provisioning new machine with config: &{Name:no-preload-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:no-preload-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:50.372351   18368 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:50.381958   18368 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:40:50.417979   18368 start.go:159] libmachine.API.Create for "no-preload-969000" (driver="qemu2")
	I0520 04:40:50.418015   18368 client.go:168] LocalClient.Create starting
	I0520 04:40:50.418115   18368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:50.418173   18368 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:50.418191   18368 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:50.418262   18368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:50.418302   18368 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:50.418315   18368 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:50.418752   18368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:50.671313   18368 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:50.919531   18368 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:50.919542   18368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:50.919762   18368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:50.932876   18368 main.go:141] libmachine: STDOUT: 
	I0520 04:40:50.932900   18368 main.go:141] libmachine: STDERR: 
	I0520 04:40:50.932972   18368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2 +20000M
	I0520 04:40:50.944086   18368 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:50.944104   18368 main.go:141] libmachine: STDERR: 
	I0520 04:40:50.944127   18368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:50.944130   18368 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:50.944173   18368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5a:2b:50:3a:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:50.945971   18368 main.go:141] libmachine: STDOUT: 
	I0520 04:40:50.945989   18368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:50.946003   18368 client.go:171] duration metric: took 527.989833ms to LocalClient.Create
	I0520 04:40:50.952140   18368 cache.go:157] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 04:40:50.952163   18368 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.947963083s
	I0520 04:40:50.952168   18368 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 04:40:50.952193   18368 cache.go:87] Successfully saved all images to host disk.
	I0520 04:40:52.948126   18368 start.go:128] duration metric: took 2.575771666s to createHost
	I0520 04:40:52.948183   18368 start.go:83] releasing machines lock for "no-preload-969000", held for 2.576199833s
	W0520 04:40:52.948377   18368 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:52.961423   18368 out.go:177] 
	W0520 04:40:52.966436   18368 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:52.966494   18368 out.go:239] * 
	* 
	W0520 04:40:52.969379   18368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:40:52.980417   18368 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-969000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (43.016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-235000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-235000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (12.20576775s)

                                                
                                                
-- stdout --
	* [embed-certs-235000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-235000" primary control-plane node in "embed-certs-235000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-235000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:40:50.610380   18411 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:50.610541   18411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:50.610544   18411 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:50.610547   18411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:50.610688   18411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:50.612033   18411 out.go:298] Setting JSON to false
	I0520 04:40:50.631805   18411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9621,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:40:50.631886   18411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:40:50.640946   18411 out.go:177] * [embed-certs-235000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:40:50.653841   18411 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:40:50.649882   18411 notify.go:220] Checking for updates...
	I0520 04:40:50.661864   18411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:40:50.670839   18411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:40:50.676786   18411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:40:50.679865   18411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:40:50.682869   18411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:40:50.686186   18411 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:40:50.686258   18411 config.go:182] Loaded profile config "no-preload-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:40:50.686303   18411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:40:50.689854   18411 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:40:50.697870   18411 start.go:297] selected driver: qemu2
	I0520 04:40:50.697875   18411 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:40:50.697880   18411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:40:50.700049   18411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:40:50.703872   18411 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:40:50.706894   18411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:40:50.706910   18411 cni.go:84] Creating CNI manager for ""
	I0520 04:40:50.706916   18411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:40:50.706922   18411 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:40:50.706946   18411 start.go:340] cluster config:
	{Name:embed-certs-235000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-235000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:50.711091   18411 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:50.718887   18411 out.go:177] * Starting "embed-certs-235000" primary control-plane node in "embed-certs-235000" cluster
	I0520 04:40:50.722776   18411 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:40:50.722788   18411 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:40:50.722799   18411 cache.go:56] Caching tarball of preloaded images
	I0520 04:40:50.722855   18411 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:40:50.722859   18411 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:40:50.722917   18411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/embed-certs-235000/config.json ...
	I0520 04:40:50.722928   18411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/embed-certs-235000/config.json: {Name:mkc8bb8a709a70135b668988f65a150252b14295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:40:50.723113   18411 start.go:360] acquireMachinesLock for embed-certs-235000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:52.948287   18411 start.go:364] duration metric: took 2.225182334s to acquireMachinesLock for "embed-certs-235000"
	I0520 04:40:52.948413   18411 start.go:93] Provisioning new machine with config: &{Name:embed-certs-235000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:embed-certs-235000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:40:52.948539   18411 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:40:52.961397   18411 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:40:53.007363   18411 start.go:159] libmachine.API.Create for "embed-certs-235000" (driver="qemu2")
	I0520 04:40:53.007406   18411 client.go:168] LocalClient.Create starting
	I0520 04:40:53.007524   18411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:40:53.007582   18411 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:53.007598   18411 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:53.007658   18411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:40:53.007700   18411 main.go:141] libmachine: Decoding PEM data...
	I0520 04:40:53.007722   18411 main.go:141] libmachine: Parsing certificate...
	I0520 04:40:53.008267   18411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:40:53.166829   18411 main.go:141] libmachine: Creating SSH key...
	I0520 04:40:53.303563   18411 main.go:141] libmachine: Creating Disk image...
	I0520 04:40:53.303570   18411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:40:53.306446   18411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:40:53.318692   18411 main.go:141] libmachine: STDOUT: 
	I0520 04:40:53.318714   18411 main.go:141] libmachine: STDERR: 
	I0520 04:40:53.318788   18411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2 +20000M
	I0520 04:40:53.329565   18411 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:40:53.329582   18411 main.go:141] libmachine: STDERR: 
	I0520 04:40:53.329594   18411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:40:53.329602   18411 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:40:53.329634   18411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7e:d4:b9:5f:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:40:53.331252   18411 main.go:141] libmachine: STDOUT: 
	I0520 04:40:53.331267   18411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:53.331285   18411 client.go:171] duration metric: took 323.877875ms to LocalClient.Create
	I0520 04:40:55.333458   18411 start.go:128] duration metric: took 2.384912917s to createHost
	I0520 04:40:55.333525   18411 start.go:83] releasing machines lock for "embed-certs-235000", held for 2.385229208s
	W0520 04:40:55.333633   18411 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:55.344944   18411 out.go:177] * Deleting "embed-certs-235000" in qemu2 ...
	W0520 04:40:55.369491   18411 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:55.369535   18411 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:00.371630   18411 start.go:360] acquireMachinesLock for embed-certs-235000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:00.372015   18411 start.go:364] duration metric: took 307.459µs to acquireMachinesLock for "embed-certs-235000"
	I0520 04:41:00.372201   18411 start.go:93] Provisioning new machine with config: &{Name:embed-certs-235000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:embed-certs-235000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:41:00.372480   18411 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:41:00.382134   18411 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:41:00.431752   18411 start.go:159] libmachine.API.Create for "embed-certs-235000" (driver="qemu2")
	I0520 04:41:00.431802   18411 client.go:168] LocalClient.Create starting
	I0520 04:41:00.431907   18411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:41:00.431977   18411 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:00.431998   18411 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:00.432101   18411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:41:00.432144   18411 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:00.432159   18411 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:00.432765   18411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:41:00.584677   18411 main.go:141] libmachine: Creating SSH key...
	I0520 04:41:00.709317   18411 main.go:141] libmachine: Creating Disk image...
	I0520 04:41:00.709323   18411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:41:00.709517   18411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:41:00.722123   18411 main.go:141] libmachine: STDOUT: 
	I0520 04:41:00.722143   18411 main.go:141] libmachine: STDERR: 
	I0520 04:41:00.722195   18411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2 +20000M
	I0520 04:41:00.733322   18411 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:41:00.733337   18411 main.go:141] libmachine: STDERR: 
	I0520 04:41:00.733350   18411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:41:00.733356   18411 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:41:00.733392   18411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:90:cf:1b:63:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:41:00.735120   18411 main.go:141] libmachine: STDOUT: 
	I0520 04:41:00.735136   18411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:00.735151   18411 client.go:171] duration metric: took 303.344542ms to LocalClient.Create
	I0520 04:41:02.737452   18411 start.go:128] duration metric: took 2.364938625s to createHost
	I0520 04:41:02.737533   18411 start.go:83] releasing machines lock for "embed-certs-235000", held for 2.365526459s
	W0520 04:41:02.737821   18411 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-235000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-235000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:02.753575   18411 out.go:177] 
	W0520 04:41:02.757728   18411 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:02.757774   18411 out.go:239] * 
	* 
	W0520 04:41:02.760722   18411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:02.769808   18411 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-235000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (59.87275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (12.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-969000 create -f testdata/busybox.yaml: exit status 1 (33.035833ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-969000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-969000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (32.771125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (32.216416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-969000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-969000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-969000 describe deploy/metrics-server -n kube-system: exit status 1 (27.718708ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-969000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-969000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (28.938084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (5.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-969000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-969000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.839912s)

                                                
                                                
-- stdout --
	* [no-preload-969000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-969000" primary control-plane node in "no-preload-969000" cluster
	* Restarting existing qemu2 VM for "no-preload-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:40:56.993985   18458 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:40:56.994122   18458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:56.994125   18458 out.go:304] Setting ErrFile to fd 2...
	I0520 04:40:56.994128   18458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:40:56.994267   18458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:40:56.995263   18458 out.go:298] Setting JSON to false
	I0520 04:40:57.011287   18458 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9628,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:40:57.011345   18458 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:40:57.016258   18458 out.go:177] * [no-preload-969000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:40:57.023306   18458 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:40:57.027242   18458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:40:57.023377   18458 notify.go:220] Checking for updates...
	I0520 04:40:57.030249   18458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:40:57.033301   18458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:40:57.036187   18458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:40:57.039262   18458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:40:57.042626   18458 config.go:182] Loaded profile config "no-preload-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:40:57.042903   18458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:40:57.047160   18458 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:40:57.054196   18458 start.go:297] selected driver: qemu2
	I0520 04:40:57.054203   18458 start.go:901] validating driver "qemu2" against &{Name:no-preload-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:no-preload-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:57.054257   18458 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:40:57.056486   18458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:40:57.056515   18458 cni.go:84] Creating CNI manager for ""
	I0520 04:40:57.056522   18458 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:40:57.056540   18458 start.go:340] cluster config:
	{Name:no-preload-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-969000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:40:57.060795   18458 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.068229   18458 out.go:177] * Starting "no-preload-969000" primary control-plane node in "no-preload-969000" cluster
	I0520 04:40:57.072182   18458 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:40:57.072276   18458 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/no-preload-969000/config.json ...
	I0520 04:40:57.072285   18458 cache.go:107] acquiring lock: {Name:mk39fdd918e0ddfa85f695b38d22ed352e726f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072289   18458 cache.go:107] acquiring lock: {Name:mk444a7ecc9a22caf1d26a46ca1e133e693a2457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072325   18458 cache.go:107] acquiring lock: {Name:mk05bf596f604cffc8b3d84a74a73c6df1fcf85e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072350   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 04:40:57.072353   18458 cache.go:107] acquiring lock: {Name:mka0fdc66695a7c5c0f4e1a46eeb0a16be7e8556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072364   18458 cache.go:107] acquiring lock: {Name:mk53d40f955679581c402fc3a6c580ab4e0ed960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072381   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 04:40:57.072389   18458 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 74.416µs
	I0520 04:40:57.072395   18458 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 04:40:57.072402   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 04:40:57.072408   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 04:40:57.072414   18458 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 133.208µs
	I0520 04:40:57.072422   18458 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 04:40:57.072401   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 04:40:57.072427   18458 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 74.958µs
	I0520 04:40:57.072432   18458 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 04:40:57.072416   18458 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 53.25µs
	I0520 04:40:57.072435   18458 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 04:40:57.072357   18458 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.916µs
	I0520 04:40:57.072447   18458 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 04:40:57.072408   18458 cache.go:107] acquiring lock: {Name:mk543c69021fa2b9b2c9ce52d092381e1045edbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072481   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 04:40:57.072485   18458 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 107.041µs
	I0520 04:40:57.072489   18458 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 04:40:57.072499   18458 cache.go:107] acquiring lock: {Name:mk31970d30a33a1181f78fe9a9eb5a5c6558aef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072509   18458 cache.go:107] acquiring lock: {Name:mk2cd06d1ebc1058d22c38f5321f5d936cef7d23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:40:57.072553   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 04:40:57.072562   18458 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 83.708µs
	I0520 04:40:57.072565   18458 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 04:40:57.072555   18458 cache.go:115] /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 04:40:57.072569   18458 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 84.417µs
	I0520 04:40:57.072572   18458 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 04:40:57.072574   18458 cache.go:87] Successfully saved all images to host disk.
	I0520 04:40:57.072691   18458 start.go:360] acquireMachinesLock for no-preload-969000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:40:57.072725   18458 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "no-preload-969000"
	I0520 04:40:57.072734   18458 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:40:57.072740   18458 fix.go:54] fixHost starting: 
	I0520 04:40:57.072861   18458 fix.go:112] recreateIfNeeded on no-preload-969000: state=Stopped err=<nil>
	W0520 04:40:57.072868   18458 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:40:57.081269   18458 out.go:177] * Restarting existing qemu2 VM for "no-preload-969000" ...
	I0520 04:40:57.085211   18458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5a:2b:50:3a:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:40:57.087277   18458 main.go:141] libmachine: STDOUT: 
	I0520 04:40:57.087298   18458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:40:57.087324   18458 fix.go:56] duration metric: took 14.584167ms for fixHost
	I0520 04:40:57.087328   18458 start.go:83] releasing machines lock for "no-preload-969000", held for 14.599666ms
	W0520 04:40:57.087334   18458 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:40:57.087365   18458 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:40:57.087370   18458 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:02.089587   18458 start.go:360] acquireMachinesLock for no-preload-969000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:02.737687   18458 start.go:364] duration metric: took 647.981167ms to acquireMachinesLock for "no-preload-969000"
	I0520 04:41:02.737891   18458 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:02.737910   18458 fix.go:54] fixHost starting: 
	I0520 04:41:02.738685   18458 fix.go:112] recreateIfNeeded on no-preload-969000: state=Stopped err=<nil>
	W0520 04:41:02.738711   18458 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:02.753575   18458 out.go:177] * Restarting existing qemu2 VM for "no-preload-969000" ...
	I0520 04:41:02.757831   18458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5a:2b:50:3a:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/no-preload-969000/disk.qcow2
	I0520 04:41:02.767814   18458 main.go:141] libmachine: STDOUT: 
	I0520 04:41:02.767886   18458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:02.767967   18458 fix.go:56] duration metric: took 30.062291ms for fixHost
	I0520 04:41:02.767986   18458 start.go:83] releasing machines lock for "no-preload-969000", held for 30.24425ms
	W0520 04:41:02.768251   18458 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:02.776624   18458 out.go:177] 
	W0520 04:41:02.784741   18458 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:02.784800   18458 out.go:239] * 
	* 
	W0520 04:41:02.787717   18458 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:02.793483   18458 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-969000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (46.519459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.89s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-235000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-235000 create -f testdata/busybox.yaml: exit status 1 (32.300875ms)

** stderr ** 
	error: context "embed-certs-235000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-235000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (29.553583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (32.776ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-969000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (33.780041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-969000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.645542ms)

** stderr ** 
	error: context "no-preload-969000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-969000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (30.074417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-235000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-235000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-235000 describe deploy/metrics-server -n kube-system: exit status 1 (28.984584ms)

** stderr ** 
	error: context "embed-certs-235000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-235000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (30.365416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-969000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (31.231167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-969000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-969000 --alsologtostderr -v=1: exit status 83 (39.694209ms)

-- stdout --
	* The control-plane node no-preload-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-969000"

-- /stdout --
** stderr ** 
	I0520 04:41:03.055623   18492 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:03.055780   18492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:03.055783   18492 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:03.055786   18492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:03.055904   18492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:03.056148   18492 out.go:298] Setting JSON to false
	I0520 04:41:03.056154   18492 mustload.go:65] Loading cluster: no-preload-969000
	I0520 04:41:03.056334   18492 config.go:182] Loaded profile config "no-preload-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:03.060508   18492 out.go:177] * The control-plane node no-preload-969000 host is not running: state=Stopped
	I0520 04:41:03.063546   18492 out.go:177]   To start a cluster, run: "minikube start -p no-preload-969000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-969000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (28.686542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (26.724375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-881000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-881000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.825279666s)

-- stdout --
	* [default-k8s-diff-port-881000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-881000" primary control-plane node in "default-k8s-diff-port-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:41:03.729977   18534 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:03.730133   18534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:03.730136   18534 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:03.730138   18534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:03.730257   18534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:03.731328   18534 out.go:298] Setting JSON to false
	I0520 04:41:03.747311   18534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9634,"bootTime":1716195629,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:03.747374   18534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:03.752816   18534 out.go:177] * [default-k8s-diff-port-881000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:03.759789   18534 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:41:03.759828   18534 notify.go:220] Checking for updates...
	I0520 04:41:03.762862   18534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:41:03.765806   18534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:03.768770   18534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:03.771820   18534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:41:03.774843   18534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:41:03.778136   18534 config.go:182] Loaded profile config "embed-certs-235000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:03.778201   18534 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:03.778246   18534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:03.782826   18534 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:41:03.789687   18534 start.go:297] selected driver: qemu2
	I0520 04:41:03.789694   18534 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:41:03.789700   18534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:41:03.791945   18534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:41:03.794828   18534 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:41:03.797912   18534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:41:03.797936   18534 cni.go:84] Creating CNI manager for ""
	I0520 04:41:03.797947   18534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:41:03.797951   18534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:41:03.797989   18534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-881000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:03.802475   18534 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:03.809782   18534 out.go:177] * Starting "default-k8s-diff-port-881000" primary control-plane node in "default-k8s-diff-port-881000" cluster
	I0520 04:41:03.813830   18534 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:41:03.813848   18534 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:41:03.813859   18534 cache.go:56] Caching tarball of preloaded images
	I0520 04:41:03.813917   18534 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:41:03.813923   18534 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:41:03.813982   18534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/default-k8s-diff-port-881000/config.json ...
	I0520 04:41:03.813992   18534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/default-k8s-diff-port-881000/config.json: {Name:mka636129c9ba8e45402a8174605aa77e45e9aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:41:03.814332   18534 start.go:360] acquireMachinesLock for default-k8s-diff-port-881000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:03.814365   18534 start.go:364] duration metric: took 26µs to acquireMachinesLock for "default-k8s-diff-port-881000"
	I0520 04:41:03.814376   18534 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:41:03.814411   18534 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:41:03.818860   18534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:41:03.835251   18534 start.go:159] libmachine.API.Create for "default-k8s-diff-port-881000" (driver="qemu2")
	I0520 04:41:03.835271   18534 client.go:168] LocalClient.Create starting
	I0520 04:41:03.835324   18534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:41:03.835368   18534 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:03.835381   18534 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:03.835412   18534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:41:03.835438   18534 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:03.835444   18534 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:03.835770   18534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:41:03.971404   18534 main.go:141] libmachine: Creating SSH key...
	I0520 04:41:04.058618   18534 main.go:141] libmachine: Creating Disk image...
	I0520 04:41:04.058629   18534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:41:04.058818   18534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:04.071389   18534 main.go:141] libmachine: STDOUT: 
	I0520 04:41:04.071417   18534 main.go:141] libmachine: STDERR: 
	I0520 04:41:04.071466   18534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2 +20000M
	I0520 04:41:04.082670   18534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:41:04.082686   18534 main.go:141] libmachine: STDERR: 
	I0520 04:41:04.082705   18534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:04.082710   18534 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:41:04.082741   18534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:43:ab:ea:99:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:04.084481   18534 main.go:141] libmachine: STDOUT: 
	I0520 04:41:04.084496   18534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:04.084515   18534 client.go:171] duration metric: took 249.243333ms to LocalClient.Create
	I0520 04:41:06.086682   18534 start.go:128] duration metric: took 2.272278792s to createHost
	I0520 04:41:06.086769   18534 start.go:83] releasing machines lock for "default-k8s-diff-port-881000", held for 2.272423s
	W0520 04:41:06.086879   18534 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:06.098151   18534 out.go:177] * Deleting "default-k8s-diff-port-881000" in qemu2 ...
	W0520 04:41:06.124753   18534 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:06.124794   18534 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:11.126927   18534 start.go:360] acquireMachinesLock for default-k8s-diff-port-881000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:11.127380   18534 start.go:364] duration metric: took 297.667µs to acquireMachinesLock for "default-k8s-diff-port-881000"
	I0520 04:41:11.127498   18534 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:41:11.127796   18534 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:41:11.137405   18534 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:41:11.187042   18534 start.go:159] libmachine.API.Create for "default-k8s-diff-port-881000" (driver="qemu2")
	I0520 04:41:11.187097   18534 client.go:168] LocalClient.Create starting
	I0520 04:41:11.187212   18534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:41:11.187288   18534 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:11.187304   18534 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:11.187361   18534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:41:11.187408   18534 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:11.187423   18534 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:11.187967   18534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:41:11.333176   18534 main.go:141] libmachine: Creating SSH key...
	I0520 04:41:11.441904   18534 main.go:141] libmachine: Creating Disk image...
	I0520 04:41:11.441910   18534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:41:11.442101   18534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:11.454860   18534 main.go:141] libmachine: STDOUT: 
	I0520 04:41:11.454881   18534 main.go:141] libmachine: STDERR: 
	I0520 04:41:11.454928   18534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2 +20000M
	I0520 04:41:11.465762   18534 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:41:11.465780   18534 main.go:141] libmachine: STDERR: 
	I0520 04:41:11.465791   18534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:11.465796   18534 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:41:11.465823   18534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f6:01:11:2a:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:11.467584   18534 main.go:141] libmachine: STDOUT: 
	I0520 04:41:11.467609   18534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:11.467622   18534 client.go:171] duration metric: took 280.522333ms to LocalClient.Create
	I0520 04:41:13.469770   18534 start.go:128] duration metric: took 2.341973917s to createHost
	I0520 04:41:13.469839   18534 start.go:83] releasing machines lock for "default-k8s-diff-port-881000", held for 2.342464083s
	W0520 04:41:13.470144   18534 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:13.476339   18534 out.go:177] 
	W0520 04:41:13.490259   18534 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:13.490285   18534 out.go:239] * 
	* 
	W0520 04:41:13.492879   18534 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:13.505164   18534 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-881000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (65.969709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

TestStartStop/group/embed-certs/serial/SecondStart (7.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-235000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-235000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (7.093261833s)

-- stdout --
	* [embed-certs-235000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-235000" primary control-plane node in "embed-certs-235000" cluster
	* Restarting existing qemu2 VM for "embed-certs-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 04:41:06.476966   18563 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:06.477088   18563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:06.477092   18563 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:06.477094   18563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:06.477226   18563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:06.478238   18563 out.go:298] Setting JSON to false
	I0520 04:41:06.494191   18563 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9637,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:06.494250   18563 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:06.499216   18563 out.go:177] * [embed-certs-235000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:06.506238   18563 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:41:06.506300   18563 notify.go:220] Checking for updates...
	I0520 04:41:06.510254   18563 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:41:06.513237   18563 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:06.516217   18563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:06.519238   18563 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:41:06.522273   18563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:41:06.525456   18563 config.go:182] Loaded profile config "embed-certs-235000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:06.525740   18563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:06.530197   18563 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:41:06.550270   18563 start.go:297] selected driver: qemu2
	I0520 04:41:06.550279   18563 start.go:901] validating driver "qemu2" against &{Name:embed-certs-235000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:embed-certs-235000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:06.550389   18563 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:41:06.552773   18563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:41:06.552796   18563 cni.go:84] Creating CNI manager for ""
	I0520 04:41:06.552802   18563 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:41:06.552821   18563 start.go:340] cluster config:
	{Name:embed-certs-235000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-235000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:06.557381   18563 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:06.563217   18563 out.go:177] * Starting "embed-certs-235000" primary control-plane node in "embed-certs-235000" cluster
	I0520 04:41:06.567222   18563 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:41:06.567238   18563 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:41:06.567255   18563 cache.go:56] Caching tarball of preloaded images
	I0520 04:41:06.567311   18563 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:41:06.567316   18563 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:41:06.567402   18563 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/embed-certs-235000/config.json ...
	I0520 04:41:06.567811   18563 start.go:360] acquireMachinesLock for embed-certs-235000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:06.567845   18563 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "embed-certs-235000"
	I0520 04:41:06.567856   18563 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:06.567861   18563 fix.go:54] fixHost starting: 
	I0520 04:41:06.567981   18563 fix.go:112] recreateIfNeeded on embed-certs-235000: state=Stopped err=<nil>
	W0520 04:41:06.567990   18563 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:06.571151   18563 out.go:177] * Restarting existing qemu2 VM for "embed-certs-235000" ...
	I0520 04:41:06.579233   18563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:90:cf:1b:63:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:41:06.581394   18563 main.go:141] libmachine: STDOUT: 
	I0520 04:41:06.581414   18563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:06.581442   18563 fix.go:56] duration metric: took 13.580208ms for fixHost
	I0520 04:41:06.581445   18563 start.go:83] releasing machines lock for "embed-certs-235000", held for 13.596041ms
	W0520 04:41:06.581452   18563 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:06.581486   18563 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:06.581490   18563 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:11.583560   18563 start.go:360] acquireMachinesLock for embed-certs-235000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:13.470011   18563 start.go:364] duration metric: took 1.886388083s to acquireMachinesLock for "embed-certs-235000"
	I0520 04:41:13.470164   18563 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:13.470179   18563 fix.go:54] fixHost starting: 
	I0520 04:41:13.470954   18563 fix.go:112] recreateIfNeeded on embed-certs-235000: state=Stopped err=<nil>
	W0520 04:41:13.470981   18563 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:13.479260   18563 out.go:177] * Restarting existing qemu2 VM for "embed-certs-235000" ...
	I0520 04:41:13.493640   18563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:90:cf:1b:63:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/embed-certs-235000/disk.qcow2
	I0520 04:41:13.502728   18563 main.go:141] libmachine: STDOUT: 
	I0520 04:41:13.502786   18563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:13.502848   18563 fix.go:56] duration metric: took 32.671ms for fixHost
	I0520 04:41:13.502864   18563 start.go:83] releasing machines lock for "embed-certs-235000", held for 32.820916ms
	W0520 04:41:13.503044   18563 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-235000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-235000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:13.517174   18563 out.go:177] 
	W0520 04:41:13.521251   18563 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:13.521295   18563 out.go:239] * 
	* 
	W0520 04:41:13.523855   18563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:13.531121   18563 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-235000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (56.948875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.15s)
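
Diagnostic note: every start failure in this group reduces to the same driver-level symptom visible in the logs above: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets its network and provisioning exits with GUEST_PROVISION. A minimal host-side check (a sketch only, assuming the /opt/socket_vmnet install layout shown in the executed command line above) is:

	# Is the socket_vmnet daemon running, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# Exercise the client the same way libmachine does before launching qemu-system-aarch64;
	# "Connection refused" here reproduces the failure without involving minikube at all.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the socket is absent or refusing connections, restarting the socket_vmnet daemon on the build host is the more likely fix than the suggested "minikube delete", which only recreates the guest.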

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-881000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-881000 create -f testdata/busybox.yaml: exit status 1 (32.119542ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-881000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-881000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (29.939125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (32.991042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
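
Note: the "context ... does not exist" error is a downstream symptom of the failed FirstStart above, not an independent problem: because the VM was never provisioned, minikube never wrote a default-k8s-diff-port-881000 entry into the kubeconfig, so every subsequent kubectl --context call in this group fails the same way. One way to confirm (a sketch, using the KUBECONFIG path printed in the start output above):

	# List the contexts actually registered for this run; the failed profile should be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig \
	  kubectl config get-contexts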

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-235000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (34.056959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-235000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-235000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-235000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.187208ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-235000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-235000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (29.68025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-881000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-881000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-881000 describe deploy/metrics-server -n kube-system: exit status 1 (28.78275ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-881000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-881000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (31.445667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-235000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (29.698416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-235000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-235000 --alsologtostderr -v=1: exit status 83 (47.707792ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-235000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-235000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:41:13.808123   18596 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:13.808275   18596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:13.808278   18596 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:13.808280   18596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:13.808423   18596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:13.808657   18596 out.go:298] Setting JSON to false
	I0520 04:41:13.808663   18596 mustload.go:65] Loading cluster: embed-certs-235000
	I0520 04:41:13.808852   18596 config.go:182] Loaded profile config "embed-certs-235000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:13.812516   18596 out.go:177] * The control-plane node embed-certs-235000 host is not running: state=Stopped
	I0520 04:41:13.820372   18596 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-235000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-235000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (30.790333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (26.505292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-379000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-379000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.75235825s)

                                                
                                                
-- stdout --
	* [newest-cni-379000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-379000" primary control-plane node in "newest-cni-379000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-379000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:41:14.256035   18627 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:14.256169   18627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:14.256173   18627 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:14.256175   18627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:14.256293   18627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:14.257448   18627 out.go:298] Setting JSON to false
	I0520 04:41:14.273319   18627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9645,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:14.273386   18627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:14.278506   18627 out.go:177] * [newest-cni-379000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:14.285400   18627 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:41:14.285507   18627 notify.go:220] Checking for updates...
	I0520 04:41:14.289417   18627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:41:14.292270   18627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:14.295370   18627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:14.298377   18627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:41:14.301233   18627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:41:14.304717   18627 config.go:182] Loaded profile config "default-k8s-diff-port-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:14.304775   18627 config.go:182] Loaded profile config "multinode-182000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:14.304824   18627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:14.309313   18627 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 04:41:14.316335   18627 start.go:297] selected driver: qemu2
	I0520 04:41:14.316342   18627 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:41:14.316348   18627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:41:14.318511   18627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0520 04:41:14.318535   18627 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0520 04:41:14.322403   18627 out.go:177] * Automatically selected the socket_vmnet network
	I0520 04:41:14.329409   18627 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 04:41:14.329432   18627 cni.go:84] Creating CNI manager for ""
	I0520 04:41:14.329438   18627 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:41:14.329446   18627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:41:14.329479   18627 start.go:340] cluster config:
	{Name:newest-cni-379000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:14.333871   18627 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:14.341364   18627 out.go:177] * Starting "newest-cni-379000" primary control-plane node in "newest-cni-379000" cluster
	I0520 04:41:14.345393   18627 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:41:14.345409   18627 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:41:14.345426   18627 cache.go:56] Caching tarball of preloaded images
	I0520 04:41:14.345488   18627 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:41:14.345493   18627 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:41:14.345552   18627 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/newest-cni-379000/config.json ...
	I0520 04:41:14.345564   18627 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/newest-cni-379000/config.json: {Name:mke284467f5d4a9b74227b26693d781e8309f179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:41:14.345773   18627 start.go:360] acquireMachinesLock for newest-cni-379000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:14.345806   18627 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "newest-cni-379000"
	I0520 04:41:14.345819   18627 start.go:93] Provisioning new machine with config: &{Name:newest-cni-379000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:newest-cni-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:41:14.345849   18627 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:41:14.354338   18627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:41:14.371478   18627 start.go:159] libmachine.API.Create for "newest-cni-379000" (driver="qemu2")
	I0520 04:41:14.371498   18627 client.go:168] LocalClient.Create starting
	I0520 04:41:14.371560   18627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:41:14.371591   18627 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:14.371606   18627 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:14.371661   18627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:41:14.371683   18627 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:14.371690   18627 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:14.372139   18627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:41:14.513223   18627 main.go:141] libmachine: Creating SSH key...
	I0520 04:41:14.579297   18627 main.go:141] libmachine: Creating Disk image...
	I0520 04:41:14.579303   18627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:41:14.579499   18627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:14.591886   18627 main.go:141] libmachine: STDOUT: 
	I0520 04:41:14.591907   18627 main.go:141] libmachine: STDERR: 
	I0520 04:41:14.591970   18627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2 +20000M
	I0520 04:41:14.603002   18627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:41:14.603016   18627 main.go:141] libmachine: STDERR: 
	I0520 04:41:14.603036   18627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:14.603041   18627 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:41:14.603080   18627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:20:c4:cc:94:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:14.604836   18627 main.go:141] libmachine: STDOUT: 
	I0520 04:41:14.604852   18627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:14.604871   18627 client.go:171] duration metric: took 233.371792ms to LocalClient.Create
	I0520 04:41:16.607055   18627 start.go:128] duration metric: took 2.261209167s to createHost
	I0520 04:41:16.607129   18627 start.go:83] releasing machines lock for "newest-cni-379000", held for 2.261341667s
	W0520 04:41:16.607235   18627 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:16.621737   18627 out.go:177] * Deleting "newest-cni-379000" in qemu2 ...
	W0520 04:41:16.649603   18627 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:16.649631   18627 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:21.651760   18627 start.go:360] acquireMachinesLock for newest-cni-379000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:21.662669   18627 start.go:364] duration metric: took 10.800958ms to acquireMachinesLock for "newest-cni-379000"
	I0520 04:41:21.662726   18627 start.go:93] Provisioning new machine with config: &{Name:newest-cni-379000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:newest-cni-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 04:41:21.662931   18627 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 04:41:21.670890   18627 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 04:41:21.716360   18627 start.go:159] libmachine.API.Create for "newest-cni-379000" (driver="qemu2")
	I0520 04:41:21.716413   18627 client.go:168] LocalClient.Create starting
	I0520 04:41:21.716547   18627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/ca.pem
	I0520 04:41:21.716611   18627 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:21.716635   18627 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:21.716707   18627 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18932-14402/.minikube/certs/cert.pem
	I0520 04:41:21.716750   18627 main.go:141] libmachine: Decoding PEM data...
	I0520 04:41:21.716765   18627 main.go:141] libmachine: Parsing certificate...
	I0520 04:41:21.717315   18627 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 04:41:21.867083   18627 main.go:141] libmachine: Creating SSH key...
	I0520 04:41:21.914547   18627 main.go:141] libmachine: Creating Disk image...
	I0520 04:41:21.914556   18627 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 04:41:21.914765   18627 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2.raw /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:21.928199   18627 main.go:141] libmachine: STDOUT: 
	I0520 04:41:21.928228   18627 main.go:141] libmachine: STDERR: 
	I0520 04:41:21.928310   18627 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2 +20000M
	I0520 04:41:21.941090   18627 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 04:41:21.941113   18627 main.go:141] libmachine: STDERR: 
	I0520 04:41:21.941125   18627 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:21.941133   18627 main.go:141] libmachine: Starting QEMU VM...
	I0520 04:41:21.941170   18627 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:71:54:26:e2:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:21.943073   18627 main.go:141] libmachine: STDOUT: 
	I0520 04:41:21.943093   18627 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:21.943105   18627 client.go:171] duration metric: took 226.688958ms to LocalClient.Create
	I0520 04:41:23.945423   18627 start.go:128] duration metric: took 2.282432833s to createHost
	I0520 04:41:23.945538   18627 start.go:83] releasing machines lock for "newest-cni-379000", held for 2.282876333s
	W0520 04:41:23.945856   18627 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-379000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-379000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:23.954548   18627 out.go:177] 
	W0520 04:41:23.957552   18627 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:23.957579   18627 out.go:239] * 
	* 
	W0520 04:41:23.960012   18627 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:23.968601   18627 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-379000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000: exit status 7 (66.697458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-379000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.82s)
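Note on the recurring failure: every start in this group dies on the same host-side error, Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. no socket_vmnet daemon was serving the unix socket that the qemu2 driver dials before launching qemu-system-aarch64. A minimal diagnostic sketch for the CI host, reusing the paths shown in the log above (the Homebrew service name is an assumption about how socket_vmnet was installed, not something taken from this run):

	# check that the socket exists and that a socket_vmnet server process is running
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if nothing is listening, restarting the daemon is the usual fix
	# (assumes a Homebrew-managed install of socket_vmnet; the daemon needs root)
	sudo "$(which brew)" services restart socket_vmnet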

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-881000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-881000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.663176917s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-881000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-881000" primary control-plane node in "default-k8s-diff-port-881000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:41:16.066187   18647 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:16.066317   18647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:16.066320   18647 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:16.066322   18647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:16.066444   18647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:16.067434   18647 out.go:298] Setting JSON to false
	I0520 04:41:16.083467   18647 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9647,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:16.083565   18647 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:16.088007   18647 out.go:177] * [default-k8s-diff-port-881000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:16.094913   18647 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:41:16.098802   18647 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:41:16.095023   18647 notify.go:220] Checking for updates...
	I0520 04:41:16.104864   18647 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:16.107892   18647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:16.110857   18647 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:41:16.113893   18647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:41:16.117231   18647 config.go:182] Loaded profile config "default-k8s-diff-port-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:16.117510   18647 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:16.120847   18647 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:41:16.127849   18647 start.go:297] selected driver: qemu2
	I0520 04:41:16.127855   18647 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:16.127916   18647 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:41:16.130267   18647 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 04:41:16.130289   18647 cni.go:84] Creating CNI manager for ""
	I0520 04:41:16.130297   18647 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:41:16.130322   18647 start.go:340] cluster config:
	{Name:default-k8s-diff-port-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:16.134604   18647 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:16.141890   18647 out.go:177] * Starting "default-k8s-diff-port-881000" primary control-plane node in "default-k8s-diff-port-881000" cluster
	I0520 04:41:16.145701   18647 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:41:16.145715   18647 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:41:16.145727   18647 cache.go:56] Caching tarball of preloaded images
	I0520 04:41:16.145773   18647 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:41:16.145780   18647 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:41:16.145830   18647 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/default-k8s-diff-port-881000/config.json ...
	I0520 04:41:16.146281   18647 start.go:360] acquireMachinesLock for default-k8s-diff-port-881000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:16.607274   18647 start.go:364] duration metric: took 460.96325ms to acquireMachinesLock for "default-k8s-diff-port-881000"
	I0520 04:41:16.607450   18647 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:16.607504   18647 fix.go:54] fixHost starting: 
	I0520 04:41:16.608175   18647 fix.go:112] recreateIfNeeded on default-k8s-diff-port-881000: state=Stopped err=<nil>
	W0520 04:41:16.608220   18647 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:16.613791   18647 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-881000" ...
	I0520 04:41:16.625811   18647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f6:01:11:2a:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:16.636201   18647 main.go:141] libmachine: STDOUT: 
	I0520 04:41:16.636265   18647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:16.636386   18647 fix.go:56] duration metric: took 28.891375ms for fixHost
	I0520 04:41:16.636404   18647 start.go:83] releasing machines lock for "default-k8s-diff-port-881000", held for 29.099417ms
	W0520 04:41:16.636429   18647 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:16.636587   18647 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:16.636604   18647 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:21.637746   18647 start.go:360] acquireMachinesLock for default-k8s-diff-port-881000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:21.638253   18647 start.go:364] duration metric: took 362.916µs to acquireMachinesLock for "default-k8s-diff-port-881000"
	I0520 04:41:21.638402   18647 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:21.638423   18647 fix.go:54] fixHost starting: 
	I0520 04:41:21.639204   18647 fix.go:112] recreateIfNeeded on default-k8s-diff-port-881000: state=Stopped err=<nil>
	W0520 04:41:21.639229   18647 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:21.648857   18647 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-881000" ...
	I0520 04:41:21.652921   18647 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f6:01:11:2a:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/default-k8s-diff-port-881000/disk.qcow2
	I0520 04:41:21.662434   18647 main.go:141] libmachine: STDOUT: 
	I0520 04:41:21.662495   18647 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:21.662583   18647 fix.go:56] duration metric: took 24.162458ms for fixHost
	I0520 04:41:21.662603   18647 start.go:83] releasing machines lock for "default-k8s-diff-port-881000", held for 24.326458ms
	W0520 04:41:21.662753   18647 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:21.677933   18647 out.go:177] 
	W0520 04:41:21.681784   18647 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:21.681802   18647 out.go:239] * 
	* 
	W0520 04:41:21.683344   18647 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:21.692823   18647 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-881000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (46.140583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-881000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (32.372375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
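The context "default-k8s-diff-port-881000" does not exist errors here and in AddonExistsAfterStop below follow directly from the failed SecondStart: the kubeconfig in use simply has no entry for this profile, so both the dashboard-pod wait and the describe call fail before ever reaching a cluster. One way to confirm which contexts that kubeconfig actually holds, using standard kubectl and the KUBECONFIG path shown earlier in this report:

	KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig kubectl config get-contexts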

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-881000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-881000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-881000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.939833ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-881000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-881000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (32.610125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-881000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (28.947375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
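On the -want +got diff above: the wanted entries are the registry.k8s.io control-plane images for the requested Kubernetes version plus minikube's storage provisioner, and nothing showed up on the got side, presumably because the profile's VM never started and image list --format=json had nothing to report. As a hedged cross-check, the registry.k8s.io portion of that expected set can be regenerated on any machine with kubeadm available (kubeadm is not part of this test run; the command is an assumption about a stock kubeadm install):

	kubeadm config images list --kubernetes-version=v1.30.1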

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-881000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-881000 --alsologtostderr -v=1: exit status 83 (43.745333ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-881000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-881000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:41:21.943809   18672 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:21.943964   18672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:21.943968   18672 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:21.943970   18672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:21.944110   18672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:21.944312   18672 out.go:298] Setting JSON to false
	I0520 04:41:21.944318   18672 mustload.go:65] Loading cluster: default-k8s-diff-port-881000
	I0520 04:41:21.944513   18672 config.go:182] Loaded profile config "default-k8s-diff-port-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:21.949793   18672 out.go:177] * The control-plane node default-k8s-diff-port-881000 host is not running: state=Stopped
	I0520 04:41:21.953829   18672 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-881000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-881000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (28.005125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (28.0135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-379000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-379000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.185775583s)

                                                
                                                
-- stdout --
	* [newest-cni-379000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-379000" primary control-plane node in "newest-cni-379000" cluster
	* Restarting existing qemu2 VM for "newest-cni-379000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-379000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:41:27.121383   18727 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:27.121505   18727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:27.121509   18727 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:27.121511   18727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:27.121649   18727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:27.122691   18727 out.go:298] Setting JSON to false
	I0520 04:41:27.138736   18727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9658,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:41:27.138797   18727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:41:27.143572   18727 out.go:177] * [newest-cni-379000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:41:27.153558   18727 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:41:27.157602   18727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:41:27.153613   18727 notify.go:220] Checking for updates...
	I0520 04:41:27.163550   18727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:41:27.166607   18727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:41:27.169456   18727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:41:27.172565   18727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:41:27.175850   18727 config.go:182] Loaded profile config "newest-cni-379000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:27.176120   18727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:41:27.179521   18727 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:41:27.186564   18727 start.go:297] selected driver: qemu2
	I0520 04:41:27.186573   18727 start.go:901] validating driver "qemu2" against &{Name:newest-cni-379000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:27.186639   18727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:41:27.188927   18727 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 04:41:27.188953   18727 cni.go:84] Creating CNI manager for ""
	I0520 04:41:27.188964   18727 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:41:27.189002   18727 start.go:340] cluster config:
	{Name:newest-cni-379000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:41:27.193251   18727 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:41:27.200544   18727 out.go:177] * Starting "newest-cni-379000" primary control-plane node in "newest-cni-379000" cluster
	I0520 04:41:27.204391   18727 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:41:27.204429   18727 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:41:27.204437   18727 cache.go:56] Caching tarball of preloaded images
	I0520 04:41:27.204509   18727 preload.go:173] Found /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 04:41:27.204515   18727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:41:27.204591   18727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/newest-cni-379000/config.json ...
	I0520 04:41:27.205043   18727 start.go:360] acquireMachinesLock for newest-cni-379000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:27.205078   18727 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "newest-cni-379000"
	I0520 04:41:27.205088   18727 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:27.205094   18727 fix.go:54] fixHost starting: 
	I0520 04:41:27.205216   18727 fix.go:112] recreateIfNeeded on newest-cni-379000: state=Stopped err=<nil>
	W0520 04:41:27.205229   18727 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:27.208629   18727 out.go:177] * Restarting existing qemu2 VM for "newest-cni-379000" ...
	I0520 04:41:27.216756   18727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:71:54:26:e2:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:27.218940   18727 main.go:141] libmachine: STDOUT: 
	I0520 04:41:27.218961   18727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:27.218988   18727 fix.go:56] duration metric: took 13.8945ms for fixHost
	I0520 04:41:27.218993   18727 start.go:83] releasing machines lock for "newest-cni-379000", held for 13.910375ms
	W0520 04:41:27.219000   18727 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:27.219042   18727 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:27.219047   18727 start.go:728] Will try again in 5 seconds ...
	I0520 04:41:32.221130   18727 start.go:360] acquireMachinesLock for newest-cni-379000: {Name:mk432218336963276ef8fedc0621fac2ef19cf58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 04:41:32.221468   18727 start.go:364] duration metric: took 269.334µs to acquireMachinesLock for "newest-cni-379000"
	I0520 04:41:32.221568   18727 start.go:96] Skipping create...Using existing machine configuration
	I0520 04:41:32.221587   18727 fix.go:54] fixHost starting: 
	I0520 04:41:32.222368   18727 fix.go:112] recreateIfNeeded on newest-cni-379000: state=Stopped err=<nil>
	W0520 04:41:32.222393   18727 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 04:41:32.226800   18727 out.go:177] * Restarting existing qemu2 VM for "newest-cni-379000" ...
	I0520 04:41:32.234879   18727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:71:54:26:e2:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18932-14402/.minikube/machines/newest-cni-379000/disk.qcow2
	I0520 04:41:32.243782   18727 main.go:141] libmachine: STDOUT: 
	I0520 04:41:32.243860   18727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 04:41:32.243945   18727 fix.go:56] duration metric: took 22.358291ms for fixHost
	I0520 04:41:32.243969   18727 start.go:83] releasing machines lock for "newest-cni-379000", held for 22.477583ms
	W0520 04:41:32.244163   18727 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-379000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-379000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 04:41:32.251838   18727 out.go:177] 
	W0520 04:41:32.255773   18727 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 04:41:32.255796   18727 out.go:239] * 
	* 
	W0520 04:41:32.258181   18727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:41:32.265773   18727 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-379000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000: exit status 7 (67.874375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-379000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-379000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000: exit status 7 (29.947625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-379000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-379000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-379000 --alsologtostderr -v=1: exit status 83 (41.325584ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-379000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-379000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:41:32.446542   18741 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:41:32.446752   18741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:32.446755   18741 out.go:304] Setting ErrFile to fd 2...
	I0520 04:41:32.446758   18741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:41:32.446885   18741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:41:32.447094   18741 out.go:298] Setting JSON to false
	I0520 04:41:32.447100   18741 mustload.go:65] Loading cluster: newest-cni-379000
	I0520 04:41:32.447292   18741 config.go:182] Loaded profile config "newest-cni-379000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:41:32.451693   18741 out.go:177] * The control-plane node newest-cni-379000 host is not running: state=Stopped
	I0520 04:41:32.455650   18741 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-379000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-379000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000: exit status 7 (29.020125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-379000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000: exit status 7 (29.232375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-379000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.1/json-events 6.28
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.08
18 TestDownloadOnly/v1.30.1/DeleteAll 0.23
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.36
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.05
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.23
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.74
55 TestFunctional/serial/CacheCmd/cache/add_local 1.17
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.26
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.22
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.33
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.16
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.53
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.38
258 TestNoKubernetes/serial/Stop 3.64
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.63
275 TestStartStop/group/old-k8s-version/serial/Stop 3.79
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
288 TestStartStop/group/no-preload/serial/Stop 3.58
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
297 TestStartStop/group/embed-certs/serial/Stop 3.28
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.1
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 2.86
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-078000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-078000: exit status 85 (93.813334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |          |
	|         | -p download-only-078000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:15:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:15:12.589288   14897 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:15:12.589440   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:12.589443   14897 out.go:304] Setting ErrFile to fd 2...
	I0520 04:15:12.589446   14897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:12.589555   14897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	W0520 04:15:12.589644   14897 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18932-14402/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18932-14402/.minikube/config/config.json: no such file or directory
	I0520 04:15:12.590874   14897 out.go:298] Setting JSON to true
	I0520 04:15:12.608642   14897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8083,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:15:12.608713   14897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:15:12.614062   14897 out.go:97] [download-only-078000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:15:12.617365   14897 out.go:169] MINIKUBE_LOCATION=18932
	I0520 04:15:12.614247   14897 notify.go:220] Checking for updates...
	W0520 04:15:12.614256   14897 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 04:15:12.626661   14897 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:15:12.630184   14897 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:15:12.633046   14897 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:15:12.636881   14897 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	W0520 04:15:12.645016   14897 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 04:15:12.645207   14897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:15:12.648094   14897 out.go:97] Using the qemu2 driver based on user configuration
	I0520 04:15:12.648113   14897 start.go:297] selected driver: qemu2
	I0520 04:15:12.648127   14897 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:15:12.648183   14897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:15:12.651106   14897 out.go:169] Automatically selected the socket_vmnet network
	I0520 04:15:12.654796   14897 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 04:15:12.654886   14897 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:15:12.654918   14897 cni.go:84] Creating CNI manager for ""
	I0520 04:15:12.654934   14897 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 04:15:12.654976   14897 start.go:340] cluster config:
	{Name:download-only-078000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-078000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:15:12.659911   14897 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:15:12.664149   14897 out.go:97] Downloading VM boot image ...
	I0520 04:15:12.664164   14897 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso
	I0520 04:15:16.943341   14897 out.go:97] Starting "download-only-078000" primary control-plane node in "download-only-078000" cluster
	I0520 04:15:16.943366   14897 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:15:16.999037   14897 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:15:16.999050   14897 cache.go:56] Caching tarball of preloaded images
	I0520 04:15:16.999871   14897 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:15:17.011017   14897 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 04:15:17.011024   14897 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:17.089237   14897 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 04:15:22.199198   14897 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:22.199362   14897 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:22.896045   14897 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 04:15:22.896268   14897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/download-only-078000/config.json ...
	I0520 04:15:22.896285   14897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/download-only-078000/config.json: {Name:mkd359158ddefb93e2ed43be99a3144ab2d9a0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:15:22.896552   14897 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 04:15:22.897422   14897 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0520 04:15:23.253992   14897 out.go:169] 
	W0520 04:15:23.258288   14897 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108545380 0x108545380 0x108545380 0x108545380 0x108545380 0x108545380 0x108545380] Decompressors:map[bz2:0x14000906230 gz:0x14000906238 tar:0x14000906180 tar.bz2:0x140009061a0 tar.gz:0x140009061c0 tar.xz:0x140009061e0 tar.zst:0x14000906210 tbz2:0x140009061a0 tgz:0x140009061c0 txz:0x140009061e0 tzst:0x14000906210 xz:0x14000906250 zip:0x14000906280 zst:0x14000906258] Getters:map[file:0x140012046c0 http:0x140005242d0 https:0x14000524320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 04:15:23.258314   14897 out_reason.go:110] 
	W0520 04:15:23.266311   14897 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 04:15:23.270147   14897 out.go:169] 
	
	
	* The control-plane node download-only-078000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-078000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-078000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (6.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-998000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-998000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 : (6.279797041s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (6.28s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-998000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-998000: exit status 85 (78.503791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | -p download-only-078000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| delete  | -p download-only-078000        | download-only-078000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT | 20 May 24 04:15 PDT |
	| start   | -o=json --download-only        | download-only-998000 | jenkins | v1.33.1 | 20 May 24 04:15 PDT |                     |
	|         | -p download-only-998000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 04:15:23
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 04:15:23.922496   14931 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:15:23.922650   14931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:23.922653   14931 out.go:304] Setting ErrFile to fd 2...
	I0520 04:15:23.922656   14931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:15:23.922774   14931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:15:23.923830   14931 out.go:298] Setting JSON to true
	I0520 04:15:23.939998   14931 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8094,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:15:23.940057   14931 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:15:23.944200   14931 out.go:97] [download-only-998000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:15:23.948119   14931 out.go:169] MINIKUBE_LOCATION=18932
	I0520 04:15:23.944314   14931 notify.go:220] Checking for updates...
	I0520 04:15:23.955089   14931 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:15:23.958185   14931 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:15:23.961195   14931 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:15:23.964108   14931 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	W0520 04:15:23.970186   14931 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 04:15:23.970392   14931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:15:23.973126   14931 out.go:97] Using the qemu2 driver based on user configuration
	I0520 04:15:23.973133   14931 start.go:297] selected driver: qemu2
	I0520 04:15:23.973136   14931 start.go:901] validating driver "qemu2" against <nil>
	I0520 04:15:23.973172   14931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 04:15:23.976109   14931 out.go:169] Automatically selected the socket_vmnet network
	I0520 04:15:23.981264   14931 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 04:15:23.981350   14931 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 04:15:23.981372   14931 cni.go:84] Creating CNI manager for ""
	I0520 04:15:23.981380   14931 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 04:15:23.981386   14931 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 04:15:23.981435   14931 start.go:340] cluster config:
	{Name:download-only-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:15:23.985791   14931 iso.go:125] acquiring lock: {Name:mk0fa6b85ecf94b5d4d3c3f55fc55f58d9c26dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 04:15:23.987060   14931 out.go:97] Starting "download-only-998000" primary control-plane node in "download-only-998000" cluster
	I0520 04:15:23.987066   14931 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:15:24.042201   14931 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:15:24.042223   14931 cache.go:56] Caching tarball of preloaded images
	I0520 04:15:24.042399   14931 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:15:24.047503   14931 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 04:15:24.047510   14931 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:24.117380   14931 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 04:15:28.317932   14931 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:28.318153   14931 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 04:15:28.860005   14931 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 04:15:28.860191   14931 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/download-only-998000/config.json ...
	I0520 04:15:28.860211   14931 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18932-14402/.minikube/profiles/download-only-998000/config.json: {Name:mk664806c310c6a8c7aa726def677ac5a446b385 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 04:15:28.860473   14931 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 04:15:28.860586   14931 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18932-14402/.minikube/cache/darwin/arm64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-998000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-998000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-998000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.36s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-947000 --alsologtostderr --binary-mirror http://127.0.0.1:52781 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-947000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-947000
--- PASS: TestBinaryMirror (0.36s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-313000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-313000: exit status 85 (57.365958ms)

                                                
                                                
-- stdout --
	* Profile "addons-313000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-313000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-313000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-313000: exit status 85 (61.176208ms)

                                                
                                                
-- stdout --
	* Profile "addons-313000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-313000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.05s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.05s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status: exit status 7 (30.75275ms)

                                                
                                                
-- stdout --
	nospam-630000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status: exit status 7 (29.642542ms)

                                                
                                                
-- stdout --
	nospam-630000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status: exit status 7 (28.804417ms)

                                                
                                                
-- stdout --
	nospam-630000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

                                                
                                    
TestErrorSpam/pause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause: exit status 83 (39.895834ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-630000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause: exit status 83 (37.93975ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-630000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause: exit status 83 (38.686917ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-630000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

                                                
                                    
TestErrorSpam/unpause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause: exit status 83 (37.749708ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-630000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause: exit status 83 (38.900875ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-630000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause: exit status 83 (37.935708ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-630000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-630000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

                                                
                                    
TestErrorSpam/stop (9.23s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 stop: (1.780849834s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 stop: (3.943933958s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-630000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-630000 stop: (3.507513042s)
--- PASS: TestErrorSpam/stop (9.23s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18932-14402/.minikube/files/etc/test/nested/copy/14895/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2618415604/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cache add minikube-local-cache-test:functional-873000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 cache delete minikube-local-cache-test:functional-873000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-873000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 config get cpus: exit status 14 (29.7295ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 config get cpus: exit status 14 (38.152792ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-873000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-873000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (154.690084ms)

                                                
                                                
-- stdout --
	* [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:17:16.344439   15544 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:17:16.344618   15544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.344626   15544 out.go:304] Setting ErrFile to fd 2...
	I0520 04:17:16.344629   15544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.344790   15544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:17:16.346025   15544 out.go:298] Setting JSON to false
	I0520 04:17:16.365419   15544 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8207,"bootTime":1716195629,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:17:16.365485   15544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:17:16.369083   15544 out.go:177] * [functional-873000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 04:17:16.376061   15544 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:17:16.379954   15544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:17:16.376120   15544 notify.go:220] Checking for updates...
	I0520 04:17:16.382980   15544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:17:16.386036   15544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:17:16.388988   15544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:17:16.392013   15544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:17:16.395174   15544 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:17:16.395482   15544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:17:16.399993   15544 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 04:17:16.406847   15544 start.go:297] selected driver: qemu2
	I0520 04:17:16.406853   15544 start.go:901] validating driver "qemu2" against &{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:17:16.406902   15544 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:17:16.412975   15544 out.go:177] 
	W0520 04:17:16.415960   15544 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 04:17:16.419920   15544 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-873000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)
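
The dry-run output above and below exercises minikube's memory validation: requesting 250MB with --dry-run exits with status 23 and RSRC_INSUFFICIENT_REQ_MEMORY before any VM is created. A minimal Go sketch of driving that same invocation outside the harness (the binary path, profile name, and flags are copied from the log; the surrounding program is illustrative only, not the test's actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same dry-run invocation as in the log; 250MB is below the 1800MB
	// usable minimum, so minikube is expected to fail fast without a VM.
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-873000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The report shows exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) here.
		fmt.Printf("exit code: %d\n%s", exitErr.ExitCode(), out)
	}
}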

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-873000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-873000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (104.737584ms)

                                                
                                                
-- stdout --
	* [functional-873000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 04:17:16.559339   15555 out.go:291] Setting OutFile to fd 1 ...
	I0520 04:17:16.559456   15555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.559460   15555 out.go:304] Setting ErrFile to fd 2...
	I0520 04:17:16.559462   15555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 04:17:16.559585   15555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18932-14402/.minikube/bin
	I0520 04:17:16.560847   15555 out.go:298] Setting JSON to false
	I0520 04:17:16.577341   15555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8207,"bootTime":1716195629,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 04:17:16.577422   15555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 04:17:16.581980   15555 out.go:177] * [functional-873000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0520 04:17:16.588799   15555 out.go:177]   - MINIKUBE_LOCATION=18932
	I0520 04:17:16.588902   15555 notify.go:220] Checking for updates...
	I0520 04:17:16.592988   15555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	I0520 04:17:16.595962   15555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 04:17:16.597293   15555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 04:17:16.599932   15555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	I0520 04:17:16.602970   15555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 04:17:16.606221   15555 config.go:182] Loaded profile config "functional-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 04:17:16.606456   15555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 04:17:16.610913   15555 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0520 04:17:16.617916   15555 start.go:297] selected driver: qemu2
	I0520 04:17:16.617924   15555 start.go:901] validating driver "qemu2" against &{Name:functional-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:functional-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 04:17:16.617975   15555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 04:17:16.623912   15555 out.go:177] 
	W0520 04:17:16.627974   15555 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 04:17:16.631851   15555 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.293149042s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-873000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image rm gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-873000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 image save --daemon gcr.io/google-containers/addon-resizer:functional-873000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-873000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "69.498125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.791958ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "68.499042ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.773833ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)
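
The two ProfileCmd timing checks above simply time a CLI invocation and log the duration (e.g. Took "68.499042ms"). A rough, self-contained Go sketch of that pattern, with the command taken from the log and everything else illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same command the profile_json_output check times in the report.
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "-o", "json").CombinedOutput()
	elapsed := time.Since(start)
	if err != nil {
		fmt.Println("profile list failed:", err)
	}
	fmt.Printf("took %s\n%s", elapsed, out)
}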

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013335541s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-873000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.16s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-873000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-873000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-873000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (3.53s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-152000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-152000 --output=json --user=testUser: (3.528199667s)
--- PASS: TestJSONOutput/stop/Command (3.53s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.32s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-726000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-726000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.841333ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b231e6f7-1f83-4e31-8854-cf1c0ba2a1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-726000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"285226d3-e7ac-4224-84a3-e539c61862c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18932"}}
	{"specversion":"1.0","id":"6bb63402-1d77-4e6a-8b25-abdab83177f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig"}}
	{"specversion":"1.0","id":"4bc8c0c4-7a29-4274-86b2-1d5e4f78f3a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4df73ed7-1a96-4e95-a151-5534132bee7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"76d348ec-deb6-4999-88c4-b4c737fc1a21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube"}}
	{"specversion":"1.0","id":"38dc3396-d6a6-4e39-b11e-c8bd1347b148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dd1872d7-adcc-4513-92dc-d5479031b8ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-726000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-726000
--- PASS: TestErrorJSONOutput (0.32s)
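
With --output=json, each line in the stdout above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data). A minimal Go sketch of decoding one of those lines; the struct below is illustrative and not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"dd1872d7-adcc-4513-92dc-d5479031b8ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
}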

                                                
                                    
TestMainNoArgs (0.03s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-217000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (95.920417ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-217000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18932
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18932-14402/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18932-14402/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-217000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-217000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.427125ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-217000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-217000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.681291542s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.698936125s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.38s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-217000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-217000: (3.641299041s)
--- PASS: TestNoKubernetes/serial/Stop (3.64s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-217000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-217000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.602541ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-217000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-217000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-484000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-178000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-178000 --alsologtostderr -v=3: (3.792742625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-178000 -n old-k8s-version-178000: exit status 7 (50.271916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-178000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-969000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-969000 --alsologtostderr -v=3: (3.581948833s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-969000 -n no-preload-969000: exit status 7 (54.960334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-969000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-235000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-235000 --alsologtostderr -v=3: (3.2822015s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-235000 -n embed-certs-235000: exit status 7 (56.058417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-235000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-881000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-881000 --alsologtostderr -v=3: (2.102642375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-881000 -n default-k8s-diff-port-881000: exit status 7 (60.993334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-881000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-379000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-379000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-379000 --alsologtostderr -v=3: (2.863022625s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-379000 -n newest-cni-379000: exit status 7 (58.438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-379000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (14.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2868883340/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716203795828824000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2868883340/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716203795828824000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2868883340/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716203795828824000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2868883340/001/test-1716203795828824000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.335084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.1015ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.367709ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.039916ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.9435ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.555667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.142375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo umount -f /mount-9p": exit status 83 (47.690709ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2868883340/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (14.26s)

TestFunctional/parallel/MountCmd/specific-port (14.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1045029012/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.648083ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.883ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.534708ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.689791ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.537666ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.385792ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (81.463209ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.687292ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "sudo umount -f /mount-9p": exit status 83 (50.495792ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-873000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1045029012/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (11.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4146261664/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4146261664/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4146261664/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (85.001917ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (83.549792ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (83.010917ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (87.941167ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (85.851166ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (86.034417ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-873000 ssh "findmnt -T" /mount1: exit status 83 (85.846375ms)

-- stdout --
	* The control-plane node functional-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-873000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4146261664/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4146261664/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-873000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4146261664/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.34s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-645000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-645000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-645000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/hosts:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/resolv.conf:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-645000

>>> host: crictl pods:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: crictl containers:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> k8s: describe netcat deployment:
error: context "cilium-645000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-645000" does not exist

>>> k8s: netcat logs:
error: context "cilium-645000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-645000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-645000" does not exist

>>> k8s: coredns logs:
error: context "cilium-645000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-645000" does not exist

>>> k8s: api server logs:
error: context "cilium-645000" does not exist

>>> host: /etc/cni:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: ip a s:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: ip r s:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: iptables-save:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: iptables table nat:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-645000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-645000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-645000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-645000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-645000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-645000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-645000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-645000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-645000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-645000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-645000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: kubelet daemon config:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> k8s: kubelet logs:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-645000

>>> host: docker daemon status:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: docker daemon config:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: docker system info:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: cri-docker daemon status:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: cri-docker daemon config:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: cri-dockerd version:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: containerd daemon status:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: containerd daemon config:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: containerd config dump:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: crio daemon status:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: crio daemon config:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: /etc/crio:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

>>> host: crio config:
* Profile "cilium-645000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-645000"

----------------------- debugLogs end: cilium-645000 [took: 2.284650541s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-645000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-645000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-460000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-460000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
