Test Report: QEMU_macOS 18925

9bd6871c0608907332c6bb982838c8ee113ad42f : 2024-05-20 : 34544

Failed tests (156/258)

Order | Failed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.44
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.91
27 TestAddons/Setup 10.03
28 TestCertOptions 10.17
29 TestCertExpiration 195.21
30 TestDockerFlags 10.14
31 TestForceSystemdFlag 10.67
32 TestForceSystemdEnv 10.03
38 TestErrorSpam/setup 9.87
47 TestFunctional/serial/StartWithProxy 9.92
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.81
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.07
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.19
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.28
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.27
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 89.95
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.49
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.92
141 TestMultiControlPlane/serial/StartCluster 9.97
142 TestMultiControlPlane/serial/DeployApp 109.06
143 TestMultiControlPlane/serial/PingHostFromPods 0.08
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 37.93
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.57
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 3.5
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 9.84
165 TestJSONOutput/start/Command 9.94
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.04
194 TestMinikubeProfile 10.23
197 TestMountStart/serial/StartWithMountFirst 9.98
200 TestMultiNode/serial/FreshStart2Nodes 9.97
201 TestMultiNode/serial/DeployApp2Nodes 111.03
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.13
208 TestMultiNode/serial/StartAfterStop 44.3
209 TestMultiNode/serial/RestartKeepsNodes 8.97
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.2
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 20.13
217 TestPreload 10.17
219 TestScheduledStopUnix 9.89
220 TestSkaffold 12.41
223 TestRunningBinaryUpgrade 588.76
225 TestKubernetesUpgrade 19.04
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.98
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.98
241 TestStoppedBinaryUpgrade/Upgrade 573.39
243 TestPause/serial/Start 9.87
253 TestNoKubernetes/serial/StartWithK8s 9.75
254 TestNoKubernetes/serial/StartWithStopK8s 5.29
255 TestNoKubernetes/serial/Start 5.3
259 TestNoKubernetes/serial/StartNoArgs 5.31
261 TestNetworkPlugins/group/auto/Start 9.73
262 TestNetworkPlugins/group/kindnet/Start 9.76
263 TestNetworkPlugins/group/calico/Start 9.91
264 TestNetworkPlugins/group/custom-flannel/Start 9.8
265 TestNetworkPlugins/group/false/Start 9.96
266 TestNetworkPlugins/group/enable-default-cni/Start 9.78
267 TestNetworkPlugins/group/flannel/Start 9.78
268 TestNetworkPlugins/group/bridge/Start 9.96
269 TestNetworkPlugins/group/kubenet/Start 9.71
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.75
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.89
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/no-preload/serial/SecondStart 5.24
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.1
294 TestStartStop/group/embed-certs/serial/FirstStart 9.96
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.83
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/embed-certs/serial/SecondStart 5.25
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.33
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.06
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.97
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.09
321 TestStartStop/group/newest-cni/serial/SecondStart 5.25
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (11.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-699000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-699000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.436374292s)

-- stdout --
	{"specversion":"1.0","id":"9dca442f-a959-480c-8f28-58e080c88273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-699000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e45acf6-296e-414d-ad8f-48134d9388b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"9e194e02-3d5a-4e40-b102-e94dff702866","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig"}}
	{"specversion":"1.0","id":"1c096b09-16e7-4e8f-b15e-773b9a11320b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"dbc5d02e-8c78-4c0e-a494-b7465cb05536","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"76996919-9a27-471e-b1e5-fae4ad3f7d02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube"}}
	{"specversion":"1.0","id":"b0038b2e-885e-4923-ab04-3cc011cbdedd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e098aeb7-28d3-43b8-bcd9-7d35650f05a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"23ab645a-eed9-4682-b237-2f8395545e8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ef9f3822-0ea9-44ba-b5a6-c0746ac06cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b3dcf64-520b-4637-a995-2bb39a3779c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-699000\" primary control-plane node in \"download-only-699000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb7bd01b-8ae4-49bc-8e65-c4e26465064f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e121bc08-c8b5-490f-9aac-e11807d1a1ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380] Decompressors:map[bz2:0x140009152d0 gz:0x140009152d8 tar:0x14000915280 tar.bz2:0x14000915290 tar.gz:0x140009152a0 tar.xz:0x140009152b0 tar.zst:0x140009152c0 tbz2:0x14000915290 tgz:0x140009152a0 txz:0x140009152b0 tzst:0x140009152c0 xz:0x140009152e0 zip:0x140009152f0 zst:0x140009152e8] Getters:map[file:0x140007dab50 http:0x1400089a190 https:0x1400089a1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"a534315e-0bae-406f-be9d-9dab53b9766b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0520 03:19:00.583330    5824 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:19:00.583477    5824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:00.583480    5824 out.go:304] Setting ErrFile to fd 2...
	I0520 03:19:00.583482    5824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:00.583595    5824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	W0520 03:19:00.583666    5824 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18925-5286/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18925-5286/.minikube/config/config.json: no such file or directory
	I0520 03:19:00.584870    5824 out.go:298] Setting JSON to true
	I0520 03:19:00.601983    5824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4711,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:19:00.602060    5824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:19:00.615214    5824 out.go:97] [download-only-699000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:19:00.617536    5824 out.go:169] MINIKUBE_LOCATION=18925
	W0520 03:19:00.615390    5824 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 03:19:00.615404    5824 notify.go:220] Checking for updates...
	I0520 03:19:00.646351    5824 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:19:00.650172    5824 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:19:00.654198    5824 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:19:00.658274    5824 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	W0520 03:19:00.665197    5824 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 03:19:00.665481    5824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:19:00.668153    5824 out.go:97] Using the qemu2 driver based on user configuration
	I0520 03:19:00.668174    5824 start.go:297] selected driver: qemu2
	I0520 03:19:00.668189    5824 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:19:00.668244    5824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:19:00.671161    5824 out.go:169] Automatically selected the socket_vmnet network
	I0520 03:19:00.676802    5824 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 03:19:00.676903    5824 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:19:00.676935    5824 cni.go:84] Creating CNI manager for ""
	I0520 03:19:00.676955    5824 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 03:19:00.677023    5824 start.go:340] cluster config:
	{Name:download-only-699000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:19:00.682497    5824 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:19:00.687158    5824 out.go:97] Downloading VM boot image ...
	I0520 03:19:00.687174    5824 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso
	I0520 03:19:05.309957    5824 out.go:97] Starting "download-only-699000" primary control-plane node in "download-only-699000" cluster
	I0520 03:19:05.309988    5824 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:19:05.365830    5824 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:19:05.365838    5824 cache.go:56] Caching tarball of preloaded images
	I0520 03:19:05.365975    5824 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:19:05.370020    5824 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 03:19:05.370027    5824 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:05.446701    5824 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:19:10.772670    5824 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:10.772842    5824 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:11.470207    5824 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 03:19:11.470415    5824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/download-only-699000/config.json ...
	I0520 03:19:11.470431    5824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/download-only-699000/config.json: {Name:mk263d35c0fcf02cbcc8f112bd2baeb0331f01ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:19:11.470659    5824 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:19:11.470844    5824 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0520 03:19:11.940605    5824 out.go:169] 
	W0520 03:19:11.947616    5824 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380] Decompressors:map[bz2:0x140009152d0 gz:0x140009152d8 tar:0x14000915280 tar.bz2:0x14000915290 tar.gz:0x140009152a0 tar.xz:0x140009152b0 tar.zst:0x140009152c0 tbz2:0x14000915290 tgz:0x140009152a0 txz:0x140009152b0 tzst:0x140009152c0 xz:0x140009152e0 zip:0x140009152f0 zst:0x140009152e8] Getters:map[file:0x140007dab50 http:0x1400089a190 https:0x1400089a1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 03:19:11.947647    5824 out_reason.go:110] 
	W0520 03:19:11.953592    5824 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:19:11.957372    5824 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-699000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.44s)
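
Root cause: the run exits with status 40 because the kubectl checksum URL returns HTTP 404; upstream likely never published darwin/arm64 kubectl binaries for v1.20.0, so the download cannot succeed on this agent. The following is a minimal Go sketch (hypothetical diagnostic, not part of the test suite) that probes the same checksum URL the go-getter request failed on:

// probe_checksum.go — hypothetical diagnostic, standard library only.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The checksum URL from the failure above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	// A "404 Not Found" here matches "bad response code: 404" in the log.
	fmt.Println(url, "->", resp.Status)
}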

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
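
This failure is downstream of the json-events failure above: because the kubectl download returned 404, the cached binary was never written, and the test's stat check fails. A minimal standalone sketch of the same existence check (hypothetical; the path is copied from the log):

// stat_kubectl.go — hypothetical; mirrors the os.Stat assertion the test makes.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		// Matches the failure above: "no such file or directory".
		fmt.Println("missing:", err)
		return
	}
	fmt.Println("found:", path)
}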

TestOffline (9.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-196000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-196000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.778790209s)

-- stdout --
	* [offline-docker-196000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-196000" primary control-plane node in "offline-docker-196000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-196000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:30:28.936155    7372 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:30:28.936305    7372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:28.936308    7372 out.go:304] Setting ErrFile to fd 2...
	I0520 03:30:28.936311    7372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:28.936456    7372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:30:28.937727    7372 out.go:298] Setting JSON to false
	I0520 03:30:28.955441    7372 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5399,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:30:28.955522    7372 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:30:28.960216    7372 out.go:177] * [offline-docker-196000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:30:28.967118    7372 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:30:28.967147    7372 notify.go:220] Checking for updates...
	I0520 03:30:28.975031    7372 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:30:28.978033    7372 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:30:28.981036    7372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:30:28.983949    7372 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:30:28.987019    7372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:30:28.990388    7372 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:30:28.990450    7372 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:30:28.992984    7372 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:30:29.000052    7372 start.go:297] selected driver: qemu2
	I0520 03:30:29.000061    7372 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:30:29.000068    7372 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:30:29.002042    7372 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:30:29.003275    7372 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:30:29.006078    7372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:30:29.006093    7372 cni.go:84] Creating CNI manager for ""
	I0520 03:30:29.006100    7372 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:30:29.006103    7372 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:30:29.006135    7372 start.go:340] cluster config:
	{Name:offline-docker-196000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:30:29.010558    7372 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:30:29.017982    7372 out.go:177] * Starting "offline-docker-196000" primary control-plane node in "offline-docker-196000" cluster
	I0520 03:30:29.022003    7372 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:30:29.022038    7372 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:30:29.022049    7372 cache.go:56] Caching tarball of preloaded images
	I0520 03:30:29.022135    7372 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:30:29.022141    7372 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:30:29.022234    7372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/offline-docker-196000/config.json ...
	I0520 03:30:29.022245    7372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/offline-docker-196000/config.json: {Name:mkd3e6c419a45e467defa31f9794013ffc403998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:30:29.022458    7372 start.go:360] acquireMachinesLock for offline-docker-196000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:29.022493    7372 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "offline-docker-196000"
	I0520 03:30:29.022507    7372 start.go:93] Provisioning new machine with config: &{Name:offline-docker-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:29.022546    7372 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:29.031016    7372 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:29.046634    7372 start.go:159] libmachine.API.Create for "offline-docker-196000" (driver="qemu2")
	I0520 03:30:29.046666    7372 client.go:168] LocalClient.Create starting
	I0520 03:30:29.046753    7372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:29.046790    7372 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:29.046798    7372 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:29.046843    7372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:29.046865    7372 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:29.046876    7372 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:29.047265    7372 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:29.178581    7372 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:29.256009    7372 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:29.256022    7372 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:29.256270    7372 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2
	I0520 03:30:29.278384    7372 main.go:141] libmachine: STDOUT: 
	I0520 03:30:29.278405    7372 main.go:141] libmachine: STDERR: 
	I0520 03:30:29.278469    7372 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2 +20000M
	I0520 03:30:29.290360    7372 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:29.290382    7372 main.go:141] libmachine: STDERR: 
	I0520 03:30:29.290404    7372 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2
	I0520 03:30:29.290407    7372 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:29.290445    7372 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:21:e1:2c:51:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2
	I0520 03:30:29.292397    7372 main.go:141] libmachine: STDOUT: 
	I0520 03:30:29.292416    7372 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:29.292435    7372 client.go:171] duration metric: took 245.769125ms to LocalClient.Create
	I0520 03:30:31.294524    7372 start.go:128] duration metric: took 2.272000208s to createHost
	I0520 03:30:31.294559    7372 start.go:83] releasing machines lock for "offline-docker-196000", held for 2.272103792s
	W0520 03:30:31.294588    7372 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:31.305163    7372 out.go:177] * Deleting "offline-docker-196000" in qemu2 ...
	W0520 03:30:31.314394    7372 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:31.314406    7372 start.go:728] Will try again in 5 seconds ...
	I0520 03:30:36.316543    7372 start.go:360] acquireMachinesLock for offline-docker-196000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:36.316975    7372 start.go:364] duration metric: took 346.75µs to acquireMachinesLock for "offline-docker-196000"
	I0520 03:30:36.317100    7372 start.go:93] Provisioning new machine with config: &{Name:offline-docker-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:36.317373    7372 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:36.332027    7372 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:36.382352    7372 start.go:159] libmachine.API.Create for "offline-docker-196000" (driver="qemu2")
	I0520 03:30:36.382410    7372 client.go:168] LocalClient.Create starting
	I0520 03:30:36.382533    7372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:36.382592    7372 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:36.382611    7372 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:36.382685    7372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:36.382736    7372 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:36.382748    7372 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:36.383243    7372 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:36.519973    7372 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:36.621639    7372 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:36.621645    7372 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:36.621805    7372 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2
	I0520 03:30:36.634162    7372 main.go:141] libmachine: STDOUT: 
	I0520 03:30:36.634181    7372 main.go:141] libmachine: STDERR: 
	I0520 03:30:36.634229    7372 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2 +20000M
	I0520 03:30:36.644838    7372 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:36.644854    7372 main.go:141] libmachine: STDERR: 
	I0520 03:30:36.644867    7372 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2
	I0520 03:30:36.644872    7372 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:36.644901    7372 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:60:00:b3:f2:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/offline-docker-196000/disk.qcow2
	I0520 03:30:36.646456    7372 main.go:141] libmachine: STDOUT: 
	I0520 03:30:36.646471    7372 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:36.646483    7372 client.go:171] duration metric: took 264.072208ms to LocalClient.Create
	I0520 03:30:38.648518    7372 start.go:128] duration metric: took 2.331166167s to createHost
	I0520 03:30:38.648535    7372 start.go:83] releasing machines lock for "offline-docker-196000", held for 2.33158975s
	W0520 03:30:38.648637    7372 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-196000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:38.657735    7372 out.go:177] 
	W0520 03:30:38.661847    7372 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:30:38.661855    7372 out.go:239] * 
	* 
	W0520 03:30:38.662327    7372 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:30:38.675879    7372 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-196000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-20 03:30:38.686384 -0700 PDT m=+698.203306585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-196000 -n offline-docker-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-196000 -n offline-docker-196000: exit status 7 (33.854375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-196000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-196000
--- FAIL: TestOffline (9.91s)
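
Note the shared failure mode: this test, and most of the start-based failures in this run, die with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon on the agent is not accepting connections. Below is a minimal Go sketch (hypothetical diagnostic, not part of minikube) that checks the socket directly:

// probe_socket_vmnet.go — hypothetical; dials the unix socket that
// /opt/socket_vmnet/bin/socket_vmnet_client connects to.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the error seen across this run.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}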

TestAddons/Setup (10.03s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-091000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-091000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.027513083s)

-- stdout --
	* [addons-091000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-091000" primary control-plane node in "addons-091000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-091000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:19:24.067827    5938 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:19:24.067986    5938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:24.067989    5938 out.go:304] Setting ErrFile to fd 2...
	I0520 03:19:24.067992    5938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:24.068107    5938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:19:24.069158    5938 out.go:298] Setting JSON to false
	I0520 03:19:24.085039    5938 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4735,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:19:24.085096    5938 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:19:24.089955    5938 out.go:177] * [addons-091000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:19:24.096944    5938 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:19:24.100995    5938 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:19:24.097028    5938 notify.go:220] Checking for updates...
	I0520 03:19:24.103941    5938 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:19:24.106966    5938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:19:24.114219    5938 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:19:24.116918    5938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:19:24.120144    5938 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:19:24.123905    5938 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:19:24.130953    5938 start.go:297] selected driver: qemu2
	I0520 03:19:24.130961    5938 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:19:24.130968    5938 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:19:24.133278    5938 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:19:24.136058    5938 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:19:24.139033    5938 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:19:24.139051    5938 cni.go:84] Creating CNI manager for ""
	I0520 03:19:24.139058    5938 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:19:24.139065    5938 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:19:24.139098    5938 start.go:340] cluster config:
	{Name:addons-091000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-091000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:19:24.143503    5938 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:19:24.152026    5938 out.go:177] * Starting "addons-091000" primary control-plane node in "addons-091000" cluster
	I0520 03:19:24.155976    5938 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:19:24.156008    5938 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:19:24.156028    5938 cache.go:56] Caching tarball of preloaded images
	I0520 03:19:24.156084    5938 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:19:24.156096    5938 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:19:24.156333    5938 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/addons-091000/config.json ...
	I0520 03:19:24.156344    5938 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/addons-091000/config.json: {Name:mkd9279edee801fddf5a3dbba65c8fae8967c37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:19:24.156720    5938 start.go:360] acquireMachinesLock for addons-091000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:19:24.156779    5938 start.go:364] duration metric: took 54.125µs to acquireMachinesLock for "addons-091000"
	I0520 03:19:24.156791    5938 start.go:93] Provisioning new machine with config: &{Name:addons-091000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:addons-091000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:19:24.156822    5938 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:19:24.165932    5938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 03:19:24.184122    5938 start.go:159] libmachine.API.Create for "addons-091000" (driver="qemu2")
	I0520 03:19:24.184153    5938 client.go:168] LocalClient.Create starting
	I0520 03:19:24.184280    5938 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:19:24.288838    5938 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:19:24.482847    5938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:19:24.609733    5938 main.go:141] libmachine: Creating SSH key...
	I0520 03:19:24.684593    5938 main.go:141] libmachine: Creating Disk image...
	I0520 03:19:24.684602    5938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:19:24.684805    5938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2
	I0520 03:19:24.697355    5938 main.go:141] libmachine: STDOUT: 
	I0520 03:19:24.697383    5938 main.go:141] libmachine: STDERR: 
	I0520 03:19:24.697468    5938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2 +20000M
	I0520 03:19:24.708539    5938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:19:24.708551    5938 main.go:141] libmachine: STDERR: 
	I0520 03:19:24.708565    5938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2
	I0520 03:19:24.708568    5938 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:19:24.708622    5938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ca:07:d4:be:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2
	I0520 03:19:24.710319    5938 main.go:141] libmachine: STDOUT: 
	I0520 03:19:24.710333    5938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:19:24.710361    5938 client.go:171] duration metric: took 526.20925ms to LocalClient.Create
	I0520 03:19:26.712583    5938 start.go:128] duration metric: took 2.5557335s to createHost
	I0520 03:19:26.712655    5938 start.go:83] releasing machines lock for "addons-091000", held for 2.555913208s
	W0520 03:19:26.712711    5938 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:19:26.725180    5938 out.go:177] * Deleting "addons-091000" in qemu2 ...
	W0520 03:19:26.748964    5938 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:19:26.748998    5938 start.go:728] Will try again in 5 seconds ...
	I0520 03:19:31.751144    5938 start.go:360] acquireMachinesLock for addons-091000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:19:31.751664    5938 start.go:364] duration metric: took 424.375µs to acquireMachinesLock for "addons-091000"
	I0520 03:19:31.751824    5938 start.go:93] Provisioning new machine with config: &{Name:addons-091000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:addons-091000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:19:31.752122    5938 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:19:31.762696    5938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 03:19:31.812947    5938 start.go:159] libmachine.API.Create for "addons-091000" (driver="qemu2")
	I0520 03:19:31.813006    5938 client.go:168] LocalClient.Create starting
	I0520 03:19:31.813132    5938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:19:31.813200    5938 main.go:141] libmachine: Decoding PEM data...
	I0520 03:19:31.813215    5938 main.go:141] libmachine: Parsing certificate...
	I0520 03:19:31.813316    5938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:19:31.813363    5938 main.go:141] libmachine: Decoding PEM data...
	I0520 03:19:31.813380    5938 main.go:141] libmachine: Parsing certificate...
	I0520 03:19:31.813997    5938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:19:31.952373    5938 main.go:141] libmachine: Creating SSH key...
	I0520 03:19:32.001923    5938 main.go:141] libmachine: Creating Disk image...
	I0520 03:19:32.001928    5938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:19:32.002106    5938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2
	I0520 03:19:32.014766    5938 main.go:141] libmachine: STDOUT: 
	I0520 03:19:32.014786    5938 main.go:141] libmachine: STDERR: 
	I0520 03:19:32.014845    5938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2 +20000M
	I0520 03:19:32.025784    5938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:19:32.025815    5938 main.go:141] libmachine: STDERR: 
	I0520 03:19:32.025824    5938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2
	I0520 03:19:32.025830    5938 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:19:32.025879    5938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:81:cf:1e:0f:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/addons-091000/disk.qcow2
	I0520 03:19:32.027554    5938 main.go:141] libmachine: STDOUT: 
	I0520 03:19:32.027578    5938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:19:32.027592    5938 client.go:171] duration metric: took 214.584292ms to LocalClient.Create
	I0520 03:19:34.029844    5938 start.go:128] duration metric: took 2.277712084s to createHost
	I0520 03:19:34.029945    5938 start.go:83] releasing machines lock for "addons-091000", held for 2.278296291s
	W0520 03:19:34.030293    5938 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-091000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-091000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:19:34.038155    5938 out.go:177] 
	W0520 03:19:34.043008    5938 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:19:34.043039    5938 out.go:239] * 
	* 
	W0520 03:19:34.045621    5938 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:19:34.053885    5938 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-091000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.03s)
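
Every start failure above reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the VM is never created. A minimal Go sketch (not part of the test suite) that dials the socket the way a client would, to distinguish a dead daemon (connection refused) from a path or permission problem; the socket path is taken from the log above:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const socketPath = "/var/run/socket_vmnet" // path from the failing runs above

		// A refused dial means no daemon is bound to the socket; a timeout or
		// permission error would point at a different failure mode.
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On this host the dial would report "connect: connection refused", matching the STDERR captured at 03:19:24.710 and again at 03:19:32.027.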

TestCertOptions (10.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-840000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-840000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.890710167s)

-- stdout --
	* [cert-options-840000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-840000" primary control-plane node in "cert-options-840000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-840000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-840000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-840000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-840000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-840000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.046417ms)

-- stdout --
	* The control-plane node cert-options-840000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-840000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-840000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-840000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-840000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-840000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.06975ms)

-- stdout --
	* The control-plane node cert-options-840000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-840000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-840000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-840000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-840000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-20 03:31:09.029793 -0700 PDT m=+728.547280126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-840000 -n cert-options-840000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-840000 -n cert-options-840000: exit status 7 (29.282625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-840000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-840000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-840000
--- FAIL: TestCertOptions (10.17s)
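
The SAN assertions at cert_options_test.go:69 fail vacuously here: the VM never booted, so there was no apiserver.crt to inspect. For reference, the check itself is a certificate SAN lookup; a hedged Go equivalent of the openssl call, assuming a local copy of apiserver.crt (the real test reads /var/lib/minikube/certs/apiserver.crt over SSH):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"net"
		"os"
	)

	func main() {
		// Assumed local copy; the test fetches the cert from inside the VM via ssh.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// Values taken from the test invocation above.
		for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
			found := false
			for _, ip := range cert.IPAddresses {
				if ip.Equal(net.ParseIP(want)) {
					found = true
				}
			}
			fmt.Printf("SAN IP %s: %v\n", want, found)
		}
		for _, want := range []string{"localhost", "www.google.com"} {
			found := false
			for _, name := range cert.DNSNames {
				if name == want {
					found = true
				}
			}
			fmt.Printf("SAN DNS %s: %v\n", want, found)
		}
	}

The port assertion at cert_options_test.go:93 fails for the same reason: with no cluster, kubectl config view returns an empty config, so port 8555 never appears.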

TestCertExpiration (195.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-708000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-708000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.832694792s)

-- stdout --
	* [cert-expiration-708000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-708000" primary control-plane node in "cert-expiration-708000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-708000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-708000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-708000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.22157275s)

-- stdout --
	* [cert-expiration-708000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-708000" primary control-plane node in "cert-expiration-708000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-708000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-708000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-708000" primary control-plane node in "cert-expiration-708000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-20 03:34:08.95396 -0700 PDT m=+908.474793293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-708000 -n cert-expiration-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-708000 -n cert-expiration-708000: exit status 7 (46.878334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-708000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-708000
--- FAIL: TestCertExpiration (195.21s)
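
The 195s wall time is consistent with the test design: the first start uses --cert-expiration=3m, the test then waits out the expiration window, and the restart with --cert-expiration=8760h is expected to warn about expired certs (9.8s failed start + ~180s wait + 5.2s failed restart). Since no VM ever booted, no certificates were issued and the warning could not appear. The expiry check itself reduces to comparing a certificate's NotAfter with the current time; a minimal sketch, assuming a local PEM file named client.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("client.crt") // hypothetical local cert
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM certificate found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// NotAfter in the past means the cert has expired.
		if remaining := time.Until(cert.NotAfter); remaining <= 0 {
			fmt.Printf("certificate expired %s ago\n", -remaining)
		} else {
			fmt.Printf("certificate still valid for %s\n", remaining)
		}
	}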

TestDockerFlags (10.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-521000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-521000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.89437875s)

-- stdout --
	* [docker-flags-521000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-521000" primary control-plane node in "docker-flags-521000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-521000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:30:48.872594    7569 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:30:48.872743    7569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:48.872746    7569 out.go:304] Setting ErrFile to fd 2...
	I0520 03:30:48.872749    7569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:48.872866    7569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:30:48.873917    7569 out.go:298] Setting JSON to false
	I0520 03:30:48.890077    7569 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5419,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:30:48.890140    7569 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:30:48.895694    7569 out.go:177] * [docker-flags-521000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:30:48.903667    7569 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:30:48.903726    7569 notify.go:220] Checking for updates...
	I0520 03:30:48.907674    7569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:30:48.910747    7569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:30:48.913627    7569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:30:48.920669    7569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:30:48.927541    7569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:30:48.931963    7569 config.go:182] Loaded profile config "force-systemd-flag-897000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:30:48.932029    7569 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:30:48.932084    7569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:30:48.936703    7569 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:30:48.944705    7569 start.go:297] selected driver: qemu2
	I0520 03:30:48.944710    7569 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:30:48.944718    7569 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:30:48.947058    7569 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:30:48.950539    7569 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:30:48.953806    7569 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0520 03:30:48.953830    7569 cni.go:84] Creating CNI manager for ""
	I0520 03:30:48.953838    7569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:30:48.953842    7569 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:30:48.953883    7569 start.go:340] cluster config:
	{Name:docker-flags-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:30:48.958502    7569 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:30:48.966643    7569 out.go:177] * Starting "docker-flags-521000" primary control-plane node in "docker-flags-521000" cluster
	I0520 03:30:48.970670    7569 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:30:48.970687    7569 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:30:48.970701    7569 cache.go:56] Caching tarball of preloaded images
	I0520 03:30:48.970768    7569 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:30:48.970774    7569 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:30:48.970858    7569 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/docker-flags-521000/config.json ...
	I0520 03:30:48.970870    7569 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/docker-flags-521000/config.json: {Name:mkdaeae0af6d55ca3d6633b8cb8a16f09fcacfed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:30:48.971105    7569 start.go:360] acquireMachinesLock for docker-flags-521000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:48.971143    7569 start.go:364] duration metric: took 30µs to acquireMachinesLock for "docker-flags-521000"
	I0520 03:30:48.971155    7569 start.go:93] Provisioning new machine with config: &{Name:docker-flags-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:48.971194    7569 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:48.979645    7569 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:48.997757    7569 start.go:159] libmachine.API.Create for "docker-flags-521000" (driver="qemu2")
	I0520 03:30:48.997786    7569 client.go:168] LocalClient.Create starting
	I0520 03:30:48.997877    7569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:48.997907    7569 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:48.997922    7569 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:48.997961    7569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:48.997985    7569 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:48.997993    7569 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:48.998369    7569 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:49.127720    7569 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:49.270384    7569 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:49.270395    7569 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:49.270606    7569 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2
	I0520 03:30:49.283526    7569 main.go:141] libmachine: STDOUT: 
	I0520 03:30:49.283552    7569 main.go:141] libmachine: STDERR: 
	I0520 03:30:49.283607    7569 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2 +20000M
	I0520 03:30:49.294685    7569 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:49.294698    7569 main.go:141] libmachine: STDERR: 
	I0520 03:30:49.294719    7569 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2
	I0520 03:30:49.294722    7569 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:49.294747    7569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:d4:42:f9:7c:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2
	I0520 03:30:49.296398    7569 main.go:141] libmachine: STDOUT: 
	I0520 03:30:49.296413    7569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:49.296433    7569 client.go:171] duration metric: took 298.647667ms to LocalClient.Create
	I0520 03:30:51.296750    7569 start.go:128] duration metric: took 2.325580083s to createHost
	I0520 03:30:51.296815    7569 start.go:83] releasing machines lock for "docker-flags-521000", held for 2.325707083s
	W0520 03:30:51.296960    7569 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:51.321036    7569 out.go:177] * Deleting "docker-flags-521000" in qemu2 ...
	W0520 03:30:51.339211    7569 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:51.339234    7569 start.go:728] Will try again in 5 seconds ...
	I0520 03:30:56.341384    7569 start.go:360] acquireMachinesLock for docker-flags-521000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:56.341839    7569 start.go:364] duration metric: took 348.459µs to acquireMachinesLock for "docker-flags-521000"
	I0520 03:30:56.341937    7569 start.go:93] Provisioning new machine with config: &{Name:docker-flags-521000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:56.342198    7569 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:56.351186    7569 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:56.400303    7569 start.go:159] libmachine.API.Create for "docker-flags-521000" (driver="qemu2")
	I0520 03:30:56.400358    7569 client.go:168] LocalClient.Create starting
	I0520 03:30:56.400477    7569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:56.400544    7569 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:56.400558    7569 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:56.400627    7569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:56.400669    7569 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:56.400679    7569 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:56.401251    7569 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:56.539773    7569 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:56.671142    7569 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:56.671148    7569 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:56.671323    7569 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2
	I0520 03:30:56.684004    7569 main.go:141] libmachine: STDOUT: 
	I0520 03:30:56.684026    7569 main.go:141] libmachine: STDERR: 
	I0520 03:30:56.684081    7569 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2 +20000M
	I0520 03:30:56.695004    7569 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:56.695021    7569 main.go:141] libmachine: STDERR: 
	I0520 03:30:56.695031    7569 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2
	I0520 03:30:56.695036    7569 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:56.695090    7569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:50:95:80:cd:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/docker-flags-521000/disk.qcow2
	I0520 03:30:56.696756    7569 main.go:141] libmachine: STDOUT: 
	I0520 03:30:56.696773    7569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:56.696784    7569 client.go:171] duration metric: took 296.427833ms to LocalClient.Create
	I0520 03:30:58.698897    7569 start.go:128] duration metric: took 2.356681208s to createHost
	I0520 03:30:58.698945    7569 start.go:83] releasing machines lock for "docker-flags-521000", held for 2.357124667s
	W0520 03:30:58.699271    7569 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-521000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:58.708762    7569 out.go:177] 
	W0520 03:30:58.713859    7569 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:30:58.713875    7569 out.go:239] * 
	* 
	W0520 03:30:58.715327    7569 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:30:58.726718    7569 out.go:177] 
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-521000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
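Triage note: every start attempt above dies at the same step. The VM's network is supposed to come from socket_vmnet: minikube runs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the unix socket /var/run/socket_vmnet and then hands qemu the resulting file descriptor (-netdev socket,id=net0,fd=3). The "Connection refused" in STDERR therefore means nothing is listening on that socket, so qemu is never launched at all. A minimal repro sketch, using only paths taken from the log above and a no-op command in place of qemu (the use of `true` here is an illustration, not part of the test):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# If this prints 'Failed to connect to "/var/run/socket_vmnet": Connection refused',
	# the daemon is down and every qemu2 start on this agent will fail the same way,
	# regardless of the minikube flags under test.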
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-521000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-521000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (74.954ms)
-- stdout --
	* The control-plane node docker-flags-521000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-521000"
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-521000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-521000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-521000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-521000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-521000\"\n"*.
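For reference, these two assertions can only pass against a running VM. On a healthy cluster, the command under test prints the docker systemd unit's Environment property, which should contain the values forwarded by --docker-env. A sketch (the surrounding values in the printed line may differ; the Environment=... format is systemd's):

	out/minikube-darwin-arm64 -p docker-flags-521000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to include FOO=BAR and BAZ=BAT in the printed Environment=... line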
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-521000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-521000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.60575ms)
-- stdout --
	* The control-plane node docker-flags-521000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-521000"
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-521000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-521000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-521000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-521000\"\n"
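Likewise for the --docker-opt flags: on a running VM, the dockerd command line reported by the unit's ExecStart property should carry them. A quick check sketch (hypothetical, not part of the test itself):

	out/minikube-darwin-arm64 -p docker-flags-521000 ssh "sudo systemctl show docker --property=ExecStart --no-pager" | grep -- --debug
	# a passing run's ExecStart value includes the forwarded options, e.g. --debug and --icc=true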
panic.go:626: *** TestDockerFlags FAILED at 2024-05-20 03:30:58.862716 -0700 PDT m=+718.380014085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-521000 -n docker-flags-521000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-521000 -n docker-flags-521000: exit status 7 (28.431625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-521000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-521000
--- FAIL: TestDockerFlags (10.14s)
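Since this failure reduces to the refused socket connection rather than to the flags under test, it may be worth checking the daemon itself before rerunning. A sketch, assuming socket_vmnet is installed under /opt/socket_vmnet and managed by launchd (both inferred from the client path in the logs, not confirmed by them):

	ls -l /var/run/socket_vmnet              # does the unix socket exist?
	sudo launchctl list | grep -i vmnet      # is a socket_vmnet daemon loaded?
	nc -U /var/run/socket_vmnet </dev/null   # can anything connect to it?
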
TestForceSystemdFlag (10.67s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-897000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-897000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.46184375s)
-- stdout --
	* [force-systemd-flag-897000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-897000" primary control-plane node in "force-systemd-flag-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0520 03:30:43.250096    7547 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:30:43.250236    7547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:43.250239    7547 out.go:304] Setting ErrFile to fd 2...
	I0520 03:30:43.250242    7547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:43.250356    7547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:30:43.251435    7547 out.go:298] Setting JSON to false
	I0520 03:30:43.267219    7547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5414,"bootTime":1716195629,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:30:43.267287    7547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:30:43.273399    7547 out.go:177] * [force-systemd-flag-897000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:30:43.279381    7547 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:30:43.279428    7547 notify.go:220] Checking for updates...
	I0520 03:30:43.287315    7547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:30:43.291372    7547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:30:43.294331    7547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:30:43.297325    7547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:30:43.300338    7547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:30:43.303655    7547 config.go:182] Loaded profile config "force-systemd-env-703000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:30:43.303727    7547 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:30:43.303775    7547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:30:43.308301    7547 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:30:43.315293    7547 start.go:297] selected driver: qemu2
	I0520 03:30:43.315299    7547 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:30:43.315305    7547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:30:43.317502    7547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:30:43.320291    7547 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:30:43.323500    7547 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:30:43.323514    7547 cni.go:84] Creating CNI manager for ""
	I0520 03:30:43.323520    7547 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:30:43.323524    7547 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:30:43.323551    7547 start.go:340] cluster config:
	{Name:force-systemd-flag-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:30:43.328036    7547 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:30:43.336328    7547 out.go:177] * Starting "force-systemd-flag-897000" primary control-plane node in "force-systemd-flag-897000" cluster
	I0520 03:30:43.340310    7547 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:30:43.340333    7547 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:30:43.340345    7547 cache.go:56] Caching tarball of preloaded images
	I0520 03:30:43.340401    7547 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:30:43.340406    7547 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:30:43.340459    7547 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/force-systemd-flag-897000/config.json ...
	I0520 03:30:43.340470    7547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/force-systemd-flag-897000/config.json: {Name:mkc826f9bb14d5936e92bf5820a59b65684d440c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:30:43.340733    7547 start.go:360] acquireMachinesLock for force-systemd-flag-897000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:43.340773    7547 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "force-systemd-flag-897000"
	I0520 03:30:43.340786    7547 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:43.340820    7547 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:43.348296    7547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:43.366098    7547 start.go:159] libmachine.API.Create for "force-systemd-flag-897000" (driver="qemu2")
	I0520 03:30:43.366135    7547 client.go:168] LocalClient.Create starting
	I0520 03:30:43.366203    7547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:43.366241    7547 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:43.366249    7547 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:43.366290    7547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:43.366314    7547 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:43.366323    7547 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:43.366775    7547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:43.494867    7547 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:43.684422    7547 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:43.684428    7547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:43.684615    7547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2
	I0520 03:30:43.697570    7547 main.go:141] libmachine: STDOUT: 
	I0520 03:30:43.697591    7547 main.go:141] libmachine: STDERR: 
	I0520 03:30:43.697658    7547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2 +20000M
	I0520 03:30:43.708483    7547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:43.708504    7547 main.go:141] libmachine: STDERR: 
	I0520 03:30:43.708539    7547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2
	I0520 03:30:43.708548    7547 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:43.708580    7547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a5:8f:4b:97:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2
	I0520 03:30:43.710271    7547 main.go:141] libmachine: STDOUT: 
	I0520 03:30:43.710287    7547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:43.710308    7547 client.go:171] duration metric: took 344.17475ms to LocalClient.Create
	I0520 03:30:45.712486    7547 start.go:128] duration metric: took 2.371671667s to createHost
	I0520 03:30:45.712553    7547 start.go:83] releasing machines lock for "force-systemd-flag-897000", held for 2.371814958s
	W0520 03:30:45.712627    7547 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:45.719114    7547 out.go:177] * Deleting "force-systemd-flag-897000" in qemu2 ...
	W0520 03:30:45.743771    7547 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:45.743800    7547 start.go:728] Will try again in 5 seconds ...
	I0520 03:30:50.745923    7547 start.go:360] acquireMachinesLock for force-systemd-flag-897000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:51.297000    7547 start.go:364] duration metric: took 550.945542ms to acquireMachinesLock for "force-systemd-flag-897000"
	I0520 03:30:51.297237    7547 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:51.297477    7547 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:51.310960    7547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:51.359910    7547 start.go:159] libmachine.API.Create for "force-systemd-flag-897000" (driver="qemu2")
	I0520 03:30:51.359965    7547 client.go:168] LocalClient.Create starting
	I0520 03:30:51.360120    7547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:51.360184    7547 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:51.360208    7547 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:51.360268    7547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:51.360313    7547 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:51.360331    7547 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:51.360939    7547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:51.511362    7547 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:51.612270    7547 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:51.612275    7547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:51.612474    7547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2
	I0520 03:30:51.625151    7547 main.go:141] libmachine: STDOUT: 
	I0520 03:30:51.625172    7547 main.go:141] libmachine: STDERR: 
	I0520 03:30:51.625235    7547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2 +20000M
	I0520 03:30:51.635976    7547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:51.635994    7547 main.go:141] libmachine: STDERR: 
	I0520 03:30:51.636009    7547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2
	I0520 03:30:51.636013    7547 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:51.636046    7547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:db:2c:4b:11:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-flag-897000/disk.qcow2
	I0520 03:30:51.637760    7547 main.go:141] libmachine: STDOUT: 
	I0520 03:30:51.637778    7547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:51.637792    7547 client.go:171] duration metric: took 277.827ms to LocalClient.Create
	I0520 03:30:53.640057    7547 start.go:128] duration metric: took 2.3425845s to createHost
	I0520 03:30:53.640141    7547 start.go:83] releasing machines lock for "force-systemd-flag-897000", held for 2.343134208s
	W0520 03:30:53.640439    7547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:53.654914    7547 out.go:177] 
	W0520 03:30:53.659126    7547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:30:53.659162    7547 out.go:239] * 
	* 
	W0520 03:30:53.661903    7547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:30:53.670850    7547 out.go:177] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-897000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-897000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-897000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.455083ms)
-- stdout --
	* The control-plane node force-systemd-flag-897000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-897000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-897000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-20 03:30:53.769093 -0700 PDT m=+713.286295626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-897000 -n force-systemd-flag-897000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-897000 -n force-systemd-flag-897000: exit status 7 (33.464542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-897000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-897000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-897000
--- FAIL: TestForceSystemdFlag (10.67s)
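The assertion this run never reached: with --force-systemd, docker inside the VM is expected to report the systemd cgroup driver. On a running cluster, the check from the log would look like this (a sketch of the expected outcome, not an observed result):

	out/minikube-darwin-arm64 -p force-systemd-flag-897000 ssh "docker info --format {{.CgroupDriver}}"
	# expected output: systemd (rather than cgroupfs) when --force-systemd is honored
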
TestForceSystemdEnv (10.03s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-703000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-703000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.822561458s)
-- stdout --
	* [force-systemd-env-703000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-703000" primary control-plane node in "force-systemd-env-703000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-703000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0520 03:30:38.846155    7524 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:30:38.846323    7524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:38.846327    7524 out.go:304] Setting ErrFile to fd 2...
	I0520 03:30:38.846329    7524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:30:38.846462    7524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:30:38.847481    7524 out.go:298] Setting JSON to false
	I0520 03:30:38.863941    7524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5409,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:30:38.864011    7524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:30:38.868923    7524 out.go:177] * [force-systemd-env-703000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:30:38.875873    7524 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:30:38.879873    7524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:30:38.875950    7524 notify.go:220] Checking for updates...
	I0520 03:30:38.885879    7524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:30:38.888878    7524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:30:38.891809    7524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:30:38.894832    7524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0520 03:30:38.898247    7524 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:30:38.898298    7524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:30:38.902814    7524 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:30:38.909867    7524 start.go:297] selected driver: qemu2
	I0520 03:30:38.909873    7524 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:30:38.909878    7524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:30:38.911985    7524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:30:38.914800    7524 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:30:38.917916    7524 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:30:38.917928    7524 cni.go:84] Creating CNI manager for ""
	I0520 03:30:38.917934    7524 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:30:38.917939    7524 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:30:38.917964    7524 start.go:340] cluster config:
	{Name:force-systemd-env-703000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:30:38.922286    7524 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:30:38.930836    7524 out.go:177] * Starting "force-systemd-env-703000" primary control-plane node in "force-systemd-env-703000" cluster
	I0520 03:30:38.934872    7524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:30:38.934886    7524 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:30:38.934895    7524 cache.go:56] Caching tarball of preloaded images
	I0520 03:30:38.934941    7524 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:30:38.934946    7524 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:30:38.934993    7524 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/force-systemd-env-703000/config.json ...
	I0520 03:30:38.935004    7524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/force-systemd-env-703000/config.json: {Name:mk7e1561f6a7fc29b8876f5e2043447d6f4efce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:30:38.935209    7524 start.go:360] acquireMachinesLock for force-systemd-env-703000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:38.935242    7524 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "force-systemd-env-703000"
	I0520 03:30:38.935253    7524 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:38.935283    7524 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:38.943843    7524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:38.958909    7524 start.go:159] libmachine.API.Create for "force-systemd-env-703000" (driver="qemu2")
	I0520 03:30:38.958933    7524 client.go:168] LocalClient.Create starting
	I0520 03:30:38.958993    7524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:38.959023    7524 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:38.959032    7524 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:38.959069    7524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:38.959090    7524 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:38.959101    7524 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:38.959467    7524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:39.085930    7524 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:39.236540    7524 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:39.236548    7524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:39.236764    7524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2
	I0520 03:30:39.249994    7524 main.go:141] libmachine: STDOUT: 
	I0520 03:30:39.250024    7524 main.go:141] libmachine: STDERR: 
	I0520 03:30:39.250092    7524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2 +20000M
	I0520 03:30:39.261455    7524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:39.261480    7524 main.go:141] libmachine: STDERR: 
	I0520 03:30:39.261494    7524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2
	I0520 03:30:39.261499    7524 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:39.261532    7524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:4b:49:66:93:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2
	I0520 03:30:39.263314    7524 main.go:141] libmachine: STDOUT: 
	I0520 03:30:39.263334    7524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:39.263354    7524 client.go:171] duration metric: took 304.422709ms to LocalClient.Create
	I0520 03:30:41.265535    7524 start.go:128] duration metric: took 2.330264584s to createHost
	I0520 03:30:41.265604    7524 start.go:83] releasing machines lock for "force-systemd-env-703000", held for 2.330396792s
	W0520 03:30:41.265679    7524 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:41.273059    7524 out.go:177] * Deleting "force-systemd-env-703000" in qemu2 ...
	W0520 03:30:41.297084    7524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:41.297123    7524 start.go:728] Will try again in 5 seconds ...
	I0520 03:30:46.298964    7524 start.go:360] acquireMachinesLock for force-systemd-env-703000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:46.299441    7524 start.go:364] duration metric: took 409.708µs to acquireMachinesLock for "force-systemd-env-703000"
	I0520 03:30:46.299567    7524 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-703000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-703000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:46.299829    7524 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:46.309290    7524 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0520 03:30:46.359055    7524 start.go:159] libmachine.API.Create for "force-systemd-env-703000" (driver="qemu2")
	I0520 03:30:46.359106    7524 client.go:168] LocalClient.Create starting
	I0520 03:30:46.359224    7524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:46.359292    7524 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:46.359308    7524 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:46.359365    7524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:46.359413    7524 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:46.359422    7524 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:46.360443    7524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:46.504080    7524 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:46.569812    7524 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:46.569818    7524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:46.569992    7524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2
	I0520 03:30:46.582377    7524 main.go:141] libmachine: STDOUT: 
	I0520 03:30:46.582397    7524 main.go:141] libmachine: STDERR: 
	I0520 03:30:46.582468    7524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2 +20000M
	I0520 03:30:46.593590    7524 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:46.593606    7524 main.go:141] libmachine: STDERR: 
	I0520 03:30:46.593617    7524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2
	I0520 03:30:46.593622    7524 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:46.593667    7524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:7c:64:4b:7d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/force-systemd-env-703000/disk.qcow2
	I0520 03:30:46.595423    7524 main.go:141] libmachine: STDOUT: 
	I0520 03:30:46.595439    7524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:46.595451    7524 client.go:171] duration metric: took 236.3455ms to LocalClient.Create
	I0520 03:30:48.597728    7524 start.go:128] duration metric: took 2.297887084s to createHost
	I0520 03:30:48.597797    7524 start.go:83] releasing machines lock for "force-systemd-env-703000", held for 2.298373875s
	W0520 03:30:48.598140    7524 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-703000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:48.608666    7524 out.go:177] 
	W0520 03:30:48.612771    7524 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:30:48.612803    7524 out.go:239] * 
	* 
	W0520 03:30:48.615214    7524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:30:48.624736    7524 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-703000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-703000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-703000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.382542ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-703000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-703000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-703000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-20 03:30:48.717819 -0700 PDT m=+708.234927918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-703000 -n force-systemd-env-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-703000 -n force-systemd-env-703000: exit status 7 (32.775875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-703000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-703000
--- FAIL: TestForceSystemdEnv (10.03s)
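
Every failure in this section traces back to the same root cause: the socket_vmnet client cannot reach the /var/run/socket_vmnet control socket on the build host, so each qemu2 VM launch dies with "Connection refused" before the guest ever boots. A minimal Go sketch of a pre-flight probe for that socket; socketReachable is an illustrative helper written for this report, not minikube code:

    // preflight.go — probe the socket_vmnet control socket before
    // attempting a VM launch. The socket path is taken from the logs.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    // socketReachable reports whether a unix-domain socket accepts
    // connections within the given timeout.
    func socketReachable(path string, timeout time.Duration) error {
        conn, err := net.DialTimeout("unix", path, timeout)
        if err != nil {
            return fmt.Errorf("dial %s: %w", path, err)
        }
        return conn.Close()
    }

    func main() {
        if err := socketReachable("/var/run/socket_vmnet", 2*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, "socket_vmnet is not serving:", err)
            os.Exit(1) // same symptom as the "Connection refused" lines above
        }
        fmt.Println("socket_vmnet is reachable")
    }

On a healthy host the socket_vmnet daemon holds this socket open; when the probe fails as it does here, restarting that daemon is the likely fix rather than the suggested "minikube delete".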

                                                
                                    
TestErrorSpam/setup (9.87s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-498000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-498000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 --driver=qemu2 : exit status 80 (9.863070208s)

                                                
                                                
-- stdout --
	* [nospam-498000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-498000" primary control-plane node in "nospam-498000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-498000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-498000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-498000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-498000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18925
- KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-498000" primary control-plane node in "nospam-498000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-498000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.87s)
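
The setup assertion works by diffing minikube's stderr against a short set of lines the test considers acceptable; anything else is reported as "unexpected stderr", as seen above. A minimal sketch of that kind of allowlist filtering, with an illustrative prefix table (the real test's expected-lines logic differs):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // Illustrative allowlist only; error_spam_test keeps its own table
    // of acceptable minikube output.
    var allowedPrefixes = []string{
        "* Using the qemu2 driver",
        "* Starting",
        "* Creating qemu2 VM",
    }

    func unexpectedLines(stderr string) []string {
        var bad []string
        sc := bufio.NewScanner(strings.NewReader(stderr))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" {
                continue
            }
            allowed := false
            for _, p := range allowedPrefixes {
                if strings.HasPrefix(line, p) {
                    allowed = true
                    break
                }
            }
            if !allowed {
                bad = append(bad, line)
            }
        }
        return bad
    }

    func main() {
        stderr := "! StartHost failed, but will try again: driver start failed\n* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...\n"
        for _, l := range unexpectedLines(stderr) {
            fmt.Printf("unexpected stderr: %q\n", l)
        }
    }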

                                                
                                    
TestFunctional/serial/StartWithProxy (9.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-357000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-357000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.853656583s)

                                                
                                                
-- stdout --
	* [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-357000" primary control-plane node in "functional-357000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-357000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50875 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50875 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50875 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-357000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18925
- KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-357000" primary control-plane node in "functional-357000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-357000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50875 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50875 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50875 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (67.969ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.92s)
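
The repeated "Local proxy ignored" warnings show minikube declining to forward HTTP_PROXY=localhost:50875 into the VM: a loopback proxy on the host is unreachable from the guest. A sketch of such a check, written for this report under that assumption; isLocalProxy is an illustrative name, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "net"
        "os"
        "strings"
    )

    // isLocalProxy reports whether a proxy value such as
    // "localhost:50875" or "http://127.0.0.1:8080" points at the host
    // itself and would therefore be useless inside the VM.
    func isLocalProxy(proxy string) bool {
        hostPort := strings.TrimPrefix(strings.TrimPrefix(proxy, "http://"), "https://")
        host, _, err := net.SplitHostPort(hostPort)
        if err != nil {
            host = hostPort // no port component
        }
        if strings.EqualFold(host, "localhost") {
            return true
        }
        ip := net.ParseIP(host)
        return ip != nil && ip.IsLoopback()
    }

    func main() {
        if proxy := os.Getenv("HTTP_PROXY"); proxy != "" && isLocalProxy(proxy) {
            fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", proxy)
        }
    }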

                                                
                                    
TestFunctional/serial/SoftStart (5.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-357000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-357000 --alsologtostderr -v=8: exit status 80 (5.185465375s)

                                                
                                                
-- stdout --
	* [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-357000" primary control-plane node in "functional-357000" cluster
	* Restarting existing qemu2 VM for "functional-357000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-357000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:20:01.882041    6073 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:20:01.882200    6073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:20:01.882203    6073 out.go:304] Setting ErrFile to fd 2...
	I0520 03:20:01.882206    6073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:20:01.882341    6073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:20:01.883343    6073 out.go:298] Setting JSON to false
	I0520 03:20:01.899530    6073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4772,"bootTime":1716195629,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:20:01.899598    6073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:20:01.905036    6073 out.go:177] * [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:20:01.912312    6073 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:20:01.916042    6073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:20:01.912358    6073 notify.go:220] Checking for updates...
	I0520 03:20:01.922165    6073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:20:01.925205    6073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:20:01.928262    6073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:20:01.931224    6073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:20:01.934502    6073 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:20:01.934554    6073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:20:01.939152    6073 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:20:01.946216    6073 start.go:297] selected driver: qemu2
	I0520 03:20:01.946224    6073 start.go:901] validating driver "qemu2" against &{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:20:01.946278    6073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:20:01.948686    6073 cni.go:84] Creating CNI manager for ""
	I0520 03:20:01.948703    6073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:20:01.948745    6073 start.go:340] cluster config:
	{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:20:01.952991    6073 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:20:01.961167    6073 out.go:177] * Starting "functional-357000" primary control-plane node in "functional-357000" cluster
	I0520 03:20:01.964172    6073 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:20:01.964187    6073 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:20:01.964199    6073 cache.go:56] Caching tarball of preloaded images
	I0520 03:20:01.964250    6073 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:20:01.964255    6073 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:20:01.964308    6073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/functional-357000/config.json ...
	I0520 03:20:01.964622    6073 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:20:01.964648    6073 start.go:364] duration metric: took 21.209µs to acquireMachinesLock for "functional-357000"
	I0520 03:20:01.964658    6073 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:20:01.964666    6073 fix.go:54] fixHost starting: 
	I0520 03:20:01.964780    6073 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
	W0520 03:20:01.964788    6073 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:20:01.972188    6073 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
	I0520 03:20:01.976187    6073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
	I0520 03:20:01.978156    6073 main.go:141] libmachine: STDOUT: 
	I0520 03:20:01.978178    6073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:20:01.978205    6073 fix.go:56] duration metric: took 13.540666ms for fixHost
	I0520 03:20:01.978210    6073 start.go:83] releasing machines lock for "functional-357000", held for 13.557875ms
	W0520 03:20:01.978215    6073 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:20:01.978245    6073 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:20:01.978249    6073 start.go:728] Will try again in 5 seconds ...
	I0520 03:20:06.980295    6073 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:20:06.980688    6073 start.go:364] duration metric: took 301.292µs to acquireMachinesLock for "functional-357000"
	I0520 03:20:06.980985    6073 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:20:06.981003    6073 fix.go:54] fixHost starting: 
	I0520 03:20:06.981685    6073 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
	W0520 03:20:06.981711    6073 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:20:06.989173    6073 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
	I0520 03:20:06.993305    6073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
	I0520 03:20:07.001996    6073 main.go:141] libmachine: STDOUT: 
	I0520 03:20:07.002065    6073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:20:07.002125    6073 fix.go:56] duration metric: took 21.123458ms for fixHost
	I0520 03:20:07.002145    6073 start.go:83] releasing machines lock for "functional-357000", held for 21.434917ms
	W0520 03:20:07.002299    6073 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:20:07.009110    6073 out.go:177] 
	W0520 03:20:07.013157    6073 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:20:07.013203    6073 out.go:239] * 
	* 
	W0520 03:20:07.015754    6073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:20:07.024097    6073 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-357000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.187259333s for "functional-357000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (67.944083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
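
The trace above shows the same start/retry shape as the create path in every failing test: one attempt, the fixed "Will try again in 5 seconds" pause, a second attempt, then a GUEST_PROVISION exit. A minimal sketch of that flow, with startHost standing in for the real driver start path:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver start; in this run it always
    // fails the way the logs above do.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry() error {
        if err := startHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second) // the fixed pause seen in the logs
            if err := startHost(); err != nil {
                return fmt.Errorf("error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() {
        if err := startWithRetry(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }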

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.795958ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-357000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (30.074833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
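
This check reduces to comparing `kubectl config current-context` with the profile name; because the cluster never started, no context was ever written to the kubeconfig. A minimal standalone sketch of the same comparison:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "current-context").Output()
        if err != nil {
            fmt.Println("failed to get current-context:", err)
            return
        }
        got := strings.TrimSpace(string(out))
        const want = "functional-357000" // profile name from this run
        if got != want {
            fmt.Printf("expected current-context = %q, but got %q\n", want, got)
        }
    }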

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-357000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-357000 get po -A: exit status 1 (26.035375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-357000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-357000\n"*: args "kubectl --context functional-357000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-357000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (29.286583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
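
Likewise, the pod check runs kubectl against the profile's context and expects kube-system pods in the output; with no context registered, kubectl fails before ever reaching an API server. A minimal standalone sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-357000",
            "get", "po", "-A").CombinedOutput()
        if err != nil {
            fmt.Printf("failed to get pods: %v\n%s", err, out)
            return
        }
        if !strings.Contains(string(out), "kube-system") {
            fmt.Println(`expected stdout to include "kube-system"`)
        }
    }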

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl images: exit status 83 (42.829416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.876875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-357000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (37.900167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.003875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-357000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
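
The cache_reload sequence is: remove the image inside the node, confirm it is gone, run `cache reload`, then confirm it is back. Here every step exits with status 83 because the VM is stopped. A minimal sketch of that sequence using the same commands as the log, with error handling trimmed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes the minikube binary used by this report and wraps any
    // failure together with the captured output.
    func run(args ...string) error {
        out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %w\n%s", args, err, out)
        }
        return nil
    }

    func main() {
        const p = "functional-357000"
        const img = "registry.k8s.io/pause:latest"
        _ = run("-p", p, "ssh", "sudo docker rmi "+img) // drop the image in the node
        if run("-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
            fmt.Println("image still present after rmi")
        }
        _ = run("-p", p, "cache", "reload") // repopulate from the host-side cache
        if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
            fmt.Println("image missing after cache reload:", err)
        }
    }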

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 kubectl -- --context functional-357000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 kubectl -- --context functional-357000 get pods: exit status 1 (609.129334ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-357000
	* no server found for cluster "functional-357000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-357000 kubectl -- --context functional-357000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (31.003084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.81s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-357000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-357000 get pods: exit status 1 (921.81075ms)
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-357000
	* no server found for cluster "functional-357000"
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-357000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (885.540917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.81s)
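Both kubectl paths (via "minikube kubectl" and via out/kubectl directly) fail identically: the kubeconfig has no functional-357000 entry because the cluster was never provisioned. A quick, hypothetical way to confirm that from the same shell, using the KUBECONFIG path shown in the start output below:

    # list the contexts kubectl can see; a missing functional-357000 row
    # matches the "context was not found" error above
    KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig kubectl config get-contexts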
TestFunctional/serial/ExtraConfig (5.26s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-357000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-357000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.190409458s)
-- stdout --
	* [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-357000" primary control-plane node in "functional-357000" cluster
	* Restarting existing qemu2 VM for "functional-357000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-357000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-357000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.190980375s for "functional-357000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (67.285334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
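The restart never gets past the driver: both attempts fail with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning the socket_vmnet daemon that the qemu2 driver's networking depends on is not listening on the build host. A rough sketch of how one might check that on the agent, using standard Unix tools (the service-restart line is an assumption about how socket_vmnet was installed, not something taken from this log):

    # does the unix socket exist at the path the driver uses?
    ls -l /var/run/socket_vmnet
    # is the daemon process running at all?
    ps aux | grep -i '[s]ocket_vmnet'
    # if it was installed as a Homebrew service, restarting it may recover the host (assumed install method)
    sudo brew services restart socket_vmnet

Since the same "Connection refused" appears across most failures in this report, fixing the daemon on the agent would likely clear far more than this one test.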
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-357000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-357000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.067416ms)
** stderr ** 
	error: context "functional-357000" does not exist
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-357000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (29.160083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
TestFunctional/serial/LogsCmd (0.07s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 logs: exit status 83 (71.635833ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | -p download-only-699000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| delete  | -p download-only-699000                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | -p download-only-911000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| delete  | -p download-only-911000                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| delete  | -p download-only-699000                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| delete  | -p download-only-911000                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| start   | --download-only -p                                                       | binary-mirror-980000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | binary-mirror-980000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50846                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-980000                                                  | binary-mirror-980000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| addons  | disable dashboard -p                                                     | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | addons-091000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | addons-091000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-091000 --wait=true                                             | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-091000                                                         | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| start   | -p nospam-498000 -n=1 --memory=2250 --wait=false                         | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-498000                                                         | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-357000                              |                      |         |         |                     |                     |
	| cache   | functional-357000 cache delete                                           | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-357000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	| ssh     | functional-357000 ssh sudo                                               | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-357000                                                        | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-357000 ssh                                                    | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-357000 cache reload                                           | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	| ssh     | functional-357000 ssh                                                    | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-357000 kubectl --                                             | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | --context functional-357000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:20:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:20:12.919702    6152 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:20:12.919821    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:20:12.919822    6152 out.go:304] Setting ErrFile to fd 2...
	I0520 03:20:12.919824    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:20:12.919968    6152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:20:12.920989    6152 out.go:298] Setting JSON to false
	I0520 03:20:12.936890    6152 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4783,"bootTime":1716195629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:20:12.936951    6152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:20:12.943707    6152 out.go:177] * [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:20:12.950663    6152 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:20:12.950711    6152 notify.go:220] Checking for updates...
	I0520 03:20:12.959582    6152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:20:12.963534    6152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:20:12.966637    6152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:20:12.969621    6152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:20:12.972607    6152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:20:12.975983    6152 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:20:12.976038    6152 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:20:12.980483    6152 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:20:12.987561    6152 start.go:297] selected driver: qemu2
	I0520 03:20:12.987565    6152 start.go:901] validating driver "qemu2" against &{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:20:12.987614    6152 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:20:12.989910    6152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:20:12.989931    6152 cni.go:84] Creating CNI manager for ""
	I0520 03:20:12.989938    6152 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:20:12.989985    6152 start.go:340] cluster config:
	{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:20:12.994299    6152 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:20:13.002538    6152 out.go:177] * Starting "functional-357000" primary control-plane node in "functional-357000" cluster
	I0520 03:20:13.006439    6152 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:20:13.006459    6152 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:20:13.006468    6152 cache.go:56] Caching tarball of preloaded images
	I0520 03:20:13.006533    6152 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:20:13.006537    6152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:20:13.006590    6152 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/functional-357000/config.json ...
	I0520 03:20:13.007016    6152 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:20:13.007051    6152 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "functional-357000"
	I0520 03:20:13.007060    6152 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:20:13.007067    6152 fix.go:54] fixHost starting: 
	I0520 03:20:13.007194    6152 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
	W0520 03:20:13.007201    6152 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:20:13.015576    6152 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
	I0520 03:20:13.019603    6152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
	I0520 03:20:13.021830    6152 main.go:141] libmachine: STDOUT: 
	I0520 03:20:13.021849    6152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:20:13.021881    6152 fix.go:56] duration metric: took 14.81525ms for fixHost
	I0520 03:20:13.021885    6152 start.go:83] releasing machines lock for "functional-357000", held for 14.831375ms
	W0520 03:20:13.021890    6152 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:20:13.021929    6152 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:20:13.021934    6152 start.go:728] Will try again in 5 seconds ...
	I0520 03:20:18.024057    6152 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:20:18.024537    6152 start.go:364] duration metric: took 367.833µs to acquireMachinesLock for "functional-357000"
	I0520 03:20:18.024689    6152 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:20:18.024704    6152 fix.go:54] fixHost starting: 
	I0520 03:20:18.025458    6152 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
	W0520 03:20:18.025477    6152 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:20:18.029094    6152 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
	I0520 03:20:18.033205    6152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
	I0520 03:20:18.042364    6152 main.go:141] libmachine: STDOUT: 
	I0520 03:20:18.042414    6152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:20:18.042509    6152 fix.go:56] duration metric: took 17.807833ms for fixHost
	I0520 03:20:18.042523    6152 start.go:83] releasing machines lock for "functional-357000", held for 17.971166ms
	W0520 03:20:18.042703    6152 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:20:18.049733    6152 out.go:177] 
	W0520 03:20:18.054029    6152 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:20:18.054051    6152 out.go:239] * 
	W0520 03:20:18.056529    6152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:20:18.063953    6152 out.go:177] 
	
	
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-357000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | -p download-only-699000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-699000                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | -p download-only-911000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-911000                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-699000                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-911000                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-980000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | binary-mirror-980000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50846                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-980000                                                  | binary-mirror-980000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| addons  | disable dashboard -p                                                     | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | addons-091000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | addons-091000                                                            |                      |         |         |                     |                     |
| start   | -p addons-091000 --wait=true                                             | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-091000                                                         | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | -p nospam-498000 -n=1 --memory=2250 --wait=false                         | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-498000                                                         | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | minikube-local-cache-test:functional-357000                              |                      |         |         |                     |                     |
| cache   | functional-357000 cache delete                                           | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | minikube-local-cache-test:functional-357000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
| ssh     | functional-357000 ssh sudo                                               | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-357000                                                        | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-357000 ssh                                                    | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-357000 cache reload                                           | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
| ssh     | functional-357000 ssh                                                    | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-357000 kubectl --                                             | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | --context functional-357000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/20 03:20:12
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0520 03:20:12.919702    6152 out.go:291] Setting OutFile to fd 1 ...
I0520 03:20:12.919821    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:20:12.919822    6152 out.go:304] Setting ErrFile to fd 2...
I0520 03:20:12.919824    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:20:12.919968    6152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:20:12.920989    6152 out.go:298] Setting JSON to false
I0520 03:20:12.936890    6152 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4783,"bootTime":1716195629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0520 03:20:12.936951    6152 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0520 03:20:12.943707    6152 out.go:177] * [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0520 03:20:12.950663    6152 out.go:177]   - MINIKUBE_LOCATION=18925
I0520 03:20:12.950711    6152 notify.go:220] Checking for updates...
I0520 03:20:12.959582    6152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
I0520 03:20:12.963534    6152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0520 03:20:12.966637    6152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0520 03:20:12.969621    6152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
I0520 03:20:12.972607    6152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0520 03:20:12.975983    6152 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:20:12.976038    6152 driver.go:392] Setting default libvirt URI to qemu:///system
I0520 03:20:12.980483    6152 out.go:177] * Using the qemu2 driver based on existing profile
I0520 03:20:12.987561    6152 start.go:297] selected driver: qemu2
I0520 03:20:12.987565    6152 start.go:901] validating driver "qemu2" against &{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 03:20:12.987614    6152 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0520 03:20:12.989910    6152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0520 03:20:12.989931    6152 cni.go:84] Creating CNI manager for ""
I0520 03:20:12.989938    6152 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0520 03:20:12.989985    6152 start.go:340] cluster config:
{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 03:20:12.994299    6152 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0520 03:20:13.002538    6152 out.go:177] * Starting "functional-357000" primary control-plane node in "functional-357000" cluster
I0520 03:20:13.006439    6152 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 03:20:13.006459    6152 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0520 03:20:13.006468    6152 cache.go:56] Caching tarball of preloaded images
I0520 03:20:13.006533    6152 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0520 03:20:13.006537    6152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 03:20:13.006590    6152 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/functional-357000/config.json ...
I0520 03:20:13.007016    6152 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 03:20:13.007051    6152 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "functional-357000"
I0520 03:20:13.007060    6152 start.go:96] Skipping create...Using existing machine configuration
I0520 03:20:13.007067    6152 fix.go:54] fixHost starting: 
I0520 03:20:13.007194    6152 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
W0520 03:20:13.007201    6152 fix.go:138] unexpected machine state, will restart: <nil>
I0520 03:20:13.015576    6152 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
I0520 03:20:13.019603    6152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
I0520 03:20:13.021830    6152 main.go:141] libmachine: STDOUT: 
I0520 03:20:13.021849    6152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 03:20:13.021881    6152 fix.go:56] duration metric: took 14.81525ms for fixHost
I0520 03:20:13.021885    6152 start.go:83] releasing machines lock for "functional-357000", held for 14.831375ms
W0520 03:20:13.021890    6152 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 03:20:13.021929    6152 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 03:20:13.021934    6152 start.go:728] Will try again in 5 seconds ...
I0520 03:20:18.024057    6152 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 03:20:18.024537    6152 start.go:364] duration metric: took 367.833µs to acquireMachinesLock for "functional-357000"
I0520 03:20:18.024689    6152 start.go:96] Skipping create...Using existing machine configuration
I0520 03:20:18.024704    6152 fix.go:54] fixHost starting: 
I0520 03:20:18.025458    6152 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
W0520 03:20:18.025477    6152 fix.go:138] unexpected machine state, will restart: <nil>
I0520 03:20:18.029094    6152 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
I0520 03:20:18.033205    6152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
I0520 03:20:18.042364    6152 main.go:141] libmachine: STDOUT: 
I0520 03:20:18.042414    6152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0520 03:20:18.042509    6152 fix.go:56] duration metric: took 17.807833ms for fixHost
I0520 03:20:18.042523    6152 start.go:83] releasing machines lock for "functional-357000", held for 17.971166ms
W0520 03:20:18.042703    6152 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 03:20:18.049733    6152 out.go:177] 
W0520 03:20:18.054029    6152 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 03:20:18.054051    6152 out.go:239] * 
W0520 03:20:18.056529    6152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 03:20:18.063953    6152 out.go:177] 

* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.07s)
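Note that this failure is a downstream symptom: both restart attempts in the "Last Start" log above die with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so the guest VM never boots and "minikube logs" has no Linux-side content to report. A minimal triage sketch for the agent, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver documentation describes (the commands are illustrative and were not run on this host):

  ls -l /var/run/socket_vmnet              # the daemon's socket should exist
  pgrep -fl socket_vmnet                   # the daemon process should be running
  sudo brew services start socket_vmnet    # (re)start the daemon if it is not

With the daemon healthy, "minikube start -p functional-357000" should get past the GUEST_PROVISION error, after which the logs assertions become meaningful again.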

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1913607129/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | -p download-only-699000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-699000                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | -p download-only-911000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-911000                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-699000                                                  | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| delete  | -p download-only-911000                                                  | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | --download-only -p                                                       | binary-mirror-980000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | binary-mirror-980000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50846                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-980000                                                  | binary-mirror-980000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| addons  | disable dashboard -p                                                     | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | addons-091000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | addons-091000                                                            |                      |         |         |                     |                     |
| start   | -p addons-091000 --wait=true                                             | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-091000                                                         | addons-091000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | -p nospam-498000 -n=1 --memory=2250 --wait=false                         | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-498000 --log_dir                                                  | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-498000                                                         | nospam-498000        | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-357000 cache add                                              | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | minikube-local-cache-test:functional-357000                              |                      |         |         |                     |                     |
| cache   | functional-357000 cache delete                                           | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | minikube-local-cache-test:functional-357000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
| ssh     | functional-357000 ssh sudo                                               | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-357000                                                        | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-357000 ssh                                                    | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-357000 cache reload                                           | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
| ssh     | functional-357000 ssh                                                    | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 20 May 24 03:20 PDT | 20 May 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-357000 kubectl --                                             | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | --context functional-357000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-357000                                                     | functional-357000    | jenkins | v1.33.1 | 20 May 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

                                                
                                                

                                                
                                                
==> Last Start <==
Log file created at: 2024/05/20 03:20:12
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0520 03:20:12.919702    6152 out.go:291] Setting OutFile to fd 1 ...
I0520 03:20:12.919821    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:20:12.919822    6152 out.go:304] Setting ErrFile to fd 2...
I0520 03:20:12.919824    6152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:20:12.919968    6152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:20:12.920989    6152 out.go:298] Setting JSON to false
I0520 03:20:12.936890    6152 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4783,"bootTime":1716195629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0520 03:20:12.936951    6152 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0520 03:20:12.943707    6152 out.go:177] * [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0520 03:20:12.950663    6152 out.go:177]   - MINIKUBE_LOCATION=18925
I0520 03:20:12.950711    6152 notify.go:220] Checking for updates...
I0520 03:20:12.959582    6152 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
I0520 03:20:12.963534    6152 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0520 03:20:12.966637    6152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0520 03:20:12.969621    6152 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
I0520 03:20:12.972607    6152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0520 03:20:12.975983    6152 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:20:12.976038    6152 driver.go:392] Setting default libvirt URI to qemu:///system
I0520 03:20:12.980483    6152 out.go:177] * Using the qemu2 driver based on existing profile
I0520 03:20:12.987561    6152 start.go:297] selected driver: qemu2
I0520 03:20:12.987565    6152 start.go:901] validating driver "qemu2" against &{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0520 03:20:12.987614    6152 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0520 03:20:12.989910    6152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0520 03:20:12.989931    6152 cni.go:84] Creating CNI manager for ""
I0520 03:20:12.989938    6152 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0520 03:20:12.989985    6152 start.go:340] cluster config:
{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
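Editor's note (hedged): the ExtraOptions entry in the second config dump ({Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}) is the form minikube records for a start flag such as:

    minikube start -p functional-357000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision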
I0520 03:20:12.994299    6152 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0520 03:20:13.002538    6152 out.go:177] * Starting "functional-357000" primary control-plane node in "functional-357000" cluster
I0520 03:20:13.006439    6152 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0520 03:20:13.006459    6152 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0520 03:20:13.006468    6152 cache.go:56] Caching tarball of preloaded images
I0520 03:20:13.006533    6152 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0520 03:20:13.006537    6152 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0520 03:20:13.006590    6152 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/functional-357000/config.json ...
I0520 03:20:13.007016    6152 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 03:20:13.007051    6152 start.go:364] duration metric: took 30.334µs to acquireMachinesLock for "functional-357000"
I0520 03:20:13.007060    6152 start.go:96] Skipping create...Using existing machine configuration
I0520 03:20:13.007067    6152 fix.go:54] fixHost starting: 
I0520 03:20:13.007194    6152 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
W0520 03:20:13.007201    6152 fix.go:138] unexpected machine state, will restart: <nil>
I0520 03:20:13.015576    6152 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
I0520 03:20:13.019603    6152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
I0520 03:20:13.021830    6152 main.go:141] libmachine: STDOUT: 
I0520 03:20:13.021849    6152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0520 03:20:13.021881    6152 fix.go:56] duration metric: took 14.81525ms for fixHost
I0520 03:20:13.021885    6152 start.go:83] releasing machines lock for "functional-357000", held for 14.831375ms
W0520 03:20:13.021890    6152 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 03:20:13.021929    6152 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 03:20:13.021934    6152 start.go:728] Will try again in 5 seconds ...
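Editor's note (hedged): socket_vmnet_client connects to the /var/run/socket_vmnet unix socket and execs QEMU with that connection inherited as file descriptor 3, which is what "-netdev socket,id=net0,fd=3" in the command line above refers to. The "Connection refused" is therefore a failure to reach the socket_vmnet daemon on the CI host, hit before QEMU itself ever runs.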
I0520 03:20:18.024057    6152 start.go:360] acquireMachinesLock for functional-357000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0520 03:20:18.024537    6152 start.go:364] duration metric: took 367.833µs to acquireMachinesLock for "functional-357000"
I0520 03:20:18.024689    6152 start.go:96] Skipping create...Using existing machine configuration
I0520 03:20:18.024704    6152 fix.go:54] fixHost starting: 
I0520 03:20:18.025458    6152 fix.go:112] recreateIfNeeded on functional-357000: state=Stopped err=<nil>
W0520 03:20:18.025477    6152 fix.go:138] unexpected machine state, will restart: <nil>
I0520 03:20:18.029094    6152 out.go:177] * Restarting existing qemu2 VM for "functional-357000" ...
I0520 03:20:18.033205    6152 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:dd:40:db:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/functional-357000/disk.qcow2
I0520 03:20:18.042364    6152 main.go:141] libmachine: STDOUT: 
I0520 03:20:18.042414    6152 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0520 03:20:18.042509    6152 fix.go:56] duration metric: took 17.807833ms for fixHost
I0520 03:20:18.042523    6152 start.go:83] releasing machines lock for "functional-357000", held for 17.971166ms
W0520 03:20:18.042703    6152 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-357000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0520 03:20:18.049733    6152 out.go:177] 
W0520 03:20:18.054029    6152 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0520 03:20:18.054051    6152 out.go:239] * 
W0520 03:20:18.056529    6152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 03:20:18.063953    6152 out.go:177] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
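Editor's note (hedged): every functional-test failure below shares this root cause: the socket_vmnet daemon was unreachable, the functional-357000 VM never restarted, and each kubectl/ssh/cp step then ran against a stopped profile. A minimal check on the host, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs:

    ls -l /var/run/socket_vmnet              # the listening socket should exist
    sudo brew services start socket_vmnet    # (re)start the daemon
    minikube start -p functional-357000 --driver=qemu2 --network=socket_vmnet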
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-357000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-357000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.836875ms)

** stderr ** 
	error: context "functional-357000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-357000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.19s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-357000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-357000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-357000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-357000 --alsologtostderr -v=1] stderr:
I0520 03:21:05.548166    6487 out.go:291] Setting OutFile to fd 1 ...
I0520 03:21:05.548562    6487 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:05.548566    6487 out.go:304] Setting ErrFile to fd 2...
I0520 03:21:05.548568    6487 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:05.548723    6487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:21:05.548950    6487 mustload.go:65] Loading cluster: functional-357000
I0520 03:21:05.549145    6487 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:05.552027    6487 out.go:177] * The control-plane node functional-357000 host is not running: state=Stopped
I0520 03:21:05.555904    6487 out.go:177]   To start a cluster, run: "minikube start -p functional-357000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (31.325125ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.19s)

TestFunctional/parallel/StatusCmd (0.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 status: exit status 7 (28.711458ms)

-- stdout --
	functional-357000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-357000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.598958ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-357000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 status -o json: exit status 7 (29.183125ms)

-- stdout --
	{"Name":"functional-357000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-357000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (28.934ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
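Editor's note (hedged): per "minikube status --help", the exit status encodes component state bitwise, so status 7 above reads as 1 (host NOK) + 2 (kubelet NOK) + 4 (apiserver NOK). A quick way to decode it:

    out/minikube-darwin-arm64 -p functional-357000 status; echo "exit=$?"    # exit=7 while the VM is stopped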
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-357000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-357000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.613333ms)

** stderr ** 
	error: context "functional-357000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-357000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-357000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-357000 describe po hello-node-connect: exit status 1 (28.240458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:1600: "kubectl --context functional-357000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-357000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-357000 logs -l app=hello-node-connect: exit status 1 (26.545042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:1606: "kubectl --context functional-357000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-357000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-357000 describe svc hello-node-connect: exit status 1 (26.016417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:1612: "kubectl --context functional-357000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (29.250125ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-357000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (29.325708ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "echo hello": exit status 83 (41.684042ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n"*. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "cat /etc/hostname": exit status 83 (42.799833ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-357000"- but got *"* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n"*. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (31.751083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
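Editor's note (hedged): exit status 83 shows up throughout this report wherever a subcommand that needs a running guest finds the profile stopped and prints the "To start a cluster" advice instead of executing; for example:

    out/minikube-darwin-arm64 -p functional-357000 ssh "echo hello"; echo "exit=$?"    # prints the advice, exit=83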
TestFunctional/parallel/CpCmd (0.28s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.034459ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-357000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 "sudo cat /home/docker/cp-test.txt": exit status 83 (52.061875ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-357000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-357000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cp functional-357000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd212671594/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 cp functional-357000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd212671594/001/cp-test.txt: exit status 83 (53.4895ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-357000 cp functional-357000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd212671594/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 "sudo cat /home/docker/cp-test.txt": exit status 83 (39.044917ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd212671594/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (39.994542ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-357000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (37.822542ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-357000 ssh -n functional-357000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-357000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-357000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
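Editor's note (hedged): the "content mismatch (-want +got)" blocks above appear to be go-cmp diffs of the expected file body against whatever the command actually printed. A minimal, self-contained sketch of how such a block is produced, assuming github.com/google/go-cmp:

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	// want is the fixture content; got stands in for the advice text a stopped profile prints.
    	want := "Test file for checking file cp process"
    	got := "* The control-plane node functional-357000 host is not running: state=Stopped"
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("content mismatch (-want +got):\n%s", diff)
    	}
    }

In cmp.Diff(want, got) the "-" lines belong to want and the "+" lines to got, matching the diffs shown above.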
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5818/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/test/nested/copy/5818/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/test/nested/copy/5818/hosts": exit status 83 (38.506084ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/test/nested/copy/5818/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-357000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-357000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (29.326542ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.27s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5818.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/5818.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/5818.pem": exit status 83 (37.490292ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/5818.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo cat /etc/ssl/certs/5818.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/5818.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-357000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-357000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5818.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /usr/share/ca-certificates/5818.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /usr/share/ca-certificates/5818.pem": exit status 83 (39.6545ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/5818.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo cat /usr/share/ca-certificates/5818.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/5818.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-357000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-357000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (45.7645ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-357000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-357000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/58182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/58182.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/58182.pem": exit status 83 (40.575916ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/58182.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo cat /etc/ssl/certs/58182.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/58182.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-357000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-357000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/58182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /usr/share/ca-certificates/58182.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /usr/share/ca-certificates/58182.pem": exit status 83 (38.727584ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/58182.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo cat /usr/share/ca-certificates/58182.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/58182.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-357000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-357000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.615208ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-357000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-357000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (28.990209ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.27s)
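Editor's note (hedged): the hashed filenames probed here (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) follow the OpenSSL subject-hash naming used for CA directories, so the expected link name should be derivable from the test PEM:

    openssl x509 -noout -subject_hash -in minikube_test.pem    # expected to print 51391683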
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-357000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-357000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.901792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-357000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-357000 -n functional-357000: exit status 7 (30.80025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-357000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
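
Every assertion above degrades to the same root cause: a kubeconfig with no "functional-357000" context, not a problem with the labels themselves. As a sketch (assuming access to the Jenkins host and its default kubeconfig), the context can be checked directly before re-running:

    # List the contexts kubectl knows about; functional-357000 should appear
    kubectl config get-contexts
    # Or print just that entry; empty output means the context is missing
    kubectl config view -o jsonpath='{.contexts[?(@.name=="functional-357000")].name}'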

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo systemctl is-active crio": exit status 83 (39.396875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
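
Exit status 83 is the same stopped-host path as above, so the systemd probe never actually ran inside the guest. For reference, a hedged sketch of what the assertion expects from a healthy docker-runtime node:

    # With docker as the active runtime, crio should be inactive; systemctl
    # prints the unit state and exits non-zero for inactive units, which the
    # test accepts as a pass
    minikube -p functional-357000 ssh "sudo systemctl is-active crio"
    # expected stdout on a running node: inactive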

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 version -o=json --components: exit status 83 (40.818292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-357000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-357000 image ls --format short --alsologtostderr:
I0520 03:21:05.935697    6502 out.go:291] Setting OutFile to fd 1 ...
I0520 03:21:05.935883    6502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:05.935886    6502 out.go:304] Setting ErrFile to fd 2...
I0520 03:21:05.935889    6502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:05.936010    6502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:21:05.936509    6502 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:05.936571    6502 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
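
The listing command itself exits zero but returns an empty image set, which is consistent with the stopped VM rather than a formatting bug. On any running cluster the short listing is expected to include the pause image the assertion greps for, roughly:

    minikube -p functional-357000 image ls --format short
    # expected to contain, among the kube-system images (tag varies by
    # Kubernetes version):
    #   registry.k8s.io/pause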

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-357000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-357000 image ls --format table --alsologtostderr:
I0520 03:21:06.040680    6508 out.go:291] Setting OutFile to fd 1 ...
I0520 03:21:06.040815    6508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:06.040819    6508 out.go:304] Setting ErrFile to fd 2...
I0520 03:21:06.040821    6508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:06.040950    6508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:21:06.041379    6508 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:06.041446    6508 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-357000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-357000 image ls --format json --alsologtostderr:
I0520 03:21:06.005742    6506 out.go:291] Setting OutFile to fd 1 ...
I0520 03:21:06.005913    6506 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:06.005918    6506 out.go:304] Setting ErrFile to fd 2...
I0520 03:21:06.005920    6506 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:06.006056    6506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:21:06.006470    6506 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:06.006532    6506 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-357000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-357000 image ls --format yaml --alsologtostderr:
I0520 03:21:05.970426    6504 out.go:291] Setting OutFile to fd 1 ...
I0520 03:21:05.970580    6504 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:05.970583    6504 out.go:304] Setting ErrFile to fd 2...
I0520 03:21:05.970586    6504 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:05.970709    6504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:21:05.971115    6504 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:05.971176    6504 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh pgrep buildkitd: exit status 83 (40.668209ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image build -t localhost/my-image:functional-357000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-357000 image build -t localhost/my-image:functional-357000 testdata/build --alsologtostderr:
I0520 03:21:06.116074    6512 out.go:291] Setting OutFile to fd 1 ...
I0520 03:21:06.116463    6512 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:06.116467    6512 out.go:304] Setting ErrFile to fd 2...
I0520 03:21:06.116469    6512 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:21:06.116653    6512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:21:06.117060    6512 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:06.117521    6512 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:21:06.117744    6512 build_images.go:133] succeeded building to: 
I0520 03:21:06.117748    6512 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls
functional_test.go:442: expected "localhost/my-image:functional-357000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)
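
Both steps short-circuit on the stopped host: the pgrep probe finds no buildkitd, and image build logs empty "succeeded"/"failed" lists without doing any work. A hypothetical manual replay against a running profile:

    # Probe for the buildkit daemon inside the guest
    minikube -p functional-357000 ssh pgrep buildkitd
    # Build from the repository's testdata and confirm the tag landed
    minikube -p functional-357000 image build -t localhost/my-image:functional-357000 testdata/build
    minikube -p functional-357000 image ls | grep my-image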

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-357000 docker-env) && out/minikube-darwin-arm64 status -p functional-357000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-357000 docker-env) && out/minikube-darwin-arm64 status -p functional-357000": exit status 1 (47.622292ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2: exit status 83 (41.714667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:21:05.809315    6496 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:21:05.809941    6496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.809945    6496 out.go:304] Setting ErrFile to fd 2...
	I0520 03:21:05.809947    6496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.810101    6496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:21:05.810313    6496 mustload.go:65] Loading cluster: functional-357000
	I0520 03:21:05.810493    6496 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:21:05.814980    6496 out.go:177] * The control-plane node functional-357000 host is not running: state=Stopped
	I0520 03:21:05.818944    6496 out.go:177]   To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
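
update-context bails out in mustload before touching the kubeconfig, which is why neither of the want= patterns ("No changes" / "context has been updated") can match. A hedged way to confirm the effect on a running profile, assuming the default kubeconfig location:

    minikube -p functional-357000 update-context
    # Inspect the API server address recorded for the cluster entry
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-357000")].cluster.server}'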

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2: exit status 83 (41.569416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:21:05.893688    6500 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:21:05.893824    6500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.893828    6500 out.go:304] Setting ErrFile to fd 2...
	I0520 03:21:05.893839    6500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.893960    6500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:21:05.894152    6500 mustload.go:65] Loading cluster: functional-357000
	I0520 03:21:05.894345    6500 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:21:05.899005    6500 out.go:177] * The control-plane node functional-357000 host is not running: state=Stopped
	I0520 03:21:05.903043    6500 out.go:177]   To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2: exit status 83 (41.668583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:21:05.851451    6498 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:21:05.851621    6498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.851624    6498 out.go:304] Setting ErrFile to fd 2...
	I0520 03:21:05.851626    6498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.851755    6498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:21:05.851971    6498 mustload.go:65] Loading cluster: functional-357000
	I0520 03:21:05.852167    6498 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:21:05.857019    6498 out.go:177] * The control-plane node functional-357000 host is not running: state=Stopped
	I0520 03:21:05.861006    6498 out.go:177]   To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-357000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-357000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-357000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.061375ms)

                                                
                                                
** stderr ** 
	error: context "functional-357000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-357000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 service list: exit status 83 (46.009625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-357000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 service list -o json: exit status 83 (40.8665ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-357000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 service --namespace=default --https --url hello-node: exit status 83 (44.755625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-357000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 service hello-node --url --format={{.IP}}: exit status 83 (41.89075ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-357000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 service hello-node --url: exit status 83 (41.7185ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-357000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test.go:1565: failed to parse "* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"": parse "* The control-plane node functional-357000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-357000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0520 03:20:19.810578    6274 out.go:291] Setting OutFile to fd 1 ...
I0520 03:20:19.810779    6274 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:20:19.810783    6274 out.go:304] Setting ErrFile to fd 2...
I0520 03:20:19.810785    6274 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:20:19.810915    6274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:20:19.811123    6274 mustload.go:65] Loading cluster: functional-357000
I0520 03:20:19.811332    6274 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:20:19.816026    6274 out.go:177] * The control-plane node functional-357000 host is not running: state=Stopped
I0520 03:20:19.827980    6274 out.go:177]   To start a cluster, run: "minikube start -p functional-357000"

                                                
                                                
stdout: * The control-plane node functional-357000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-357000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6275: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-357000": client config: context "functional-357000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (89.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-357000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-357000 get svc nginx-svc: exit status 1 (68.017292ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-357000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-357000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (89.95s)
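
The 89.95s is spent retrying an empty URL: with no running cluster there is no tunnel and no LoadBalancer ingress IP to hit. A hypothetical manual walk-through of the same path on a working setup:

    # Shell 1: keep the tunnel open
    minikube -p functional-357000 tunnel
    # Shell 2: read the LoadBalancer IP and fetch the nginx landing page
    IP=$(kubectl --context functional-357000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP" | grep 'Welcome to nginx!'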

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image load --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-357000 image load --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr: (1.297832333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-357000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)
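
image load reports Done because it reads the tag from the host-side Docker daemon; nothing can reach the stopped guest, so the follow-up image ls stays empty. A sketch for telling the two sides apart (hypothetical commands on the Jenkins host):

    # Host side: confirm the tag exists in the local Docker daemon
    docker image inspect --format '{{.Id}}' gcr.io/google-containers/addon-resizer:functional-357000
    # Guest side: list what actually arrived in minikube
    minikube -p functional-357000 image ls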

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image load --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-357000 image load --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr: (1.302204833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-357000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.253383208s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-357000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image load --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-357000 image load --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr: (1.166105709s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-357000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image save gcr.io/google-containers/addon-resizer:functional-357000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-357000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030277042s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
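
The scutil dump shows the cluster.local resolver was installed (resolver #8 pointing at 10.96.0.10) yet the query still times out, so the gap is the tunnel route to the service CIDR rather than macOS DNS configuration. Two hedged host-side checks:

    # Is there a route to the service IP at all? (macOS)
    route -n get 10.96.0.10
    # Retry the lookup with a short timeout once a tunnel is up
    dig +time=2 +tries=1 @10.96.0.10 nginx-svc.default.svc.cluster.local. A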

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.92s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-465000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-465000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.904972125s)

                                                
                                                
-- stdout --
	* [ha-465000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-465000" primary control-plane node in "ha-465000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-465000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:22:53.380573    6578 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:22:53.380733    6578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:22:53.380736    6578 out.go:304] Setting ErrFile to fd 2...
	I0520 03:22:53.380739    6578 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:22:53.380844    6578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:22:53.381908    6578 out.go:298] Setting JSON to false
	I0520 03:22:53.398074    6578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4944,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:22:53.398147    6578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:22:53.402578    6578 out.go:177] * [ha-465000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:22:53.409546    6578 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:22:53.409615    6578 notify.go:220] Checking for updates...
	I0520 03:22:53.413483    6578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:22:53.416546    6578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:22:53.419514    6578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:22:53.422514    6578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:22:53.425528    6578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:22:53.427061    6578 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:22:53.431512    6578 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:22:53.438333    6578 start.go:297] selected driver: qemu2
	I0520 03:22:53.438340    6578 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:22:53.438346    6578 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:22:53.440618    6578 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:22:53.443589    6578 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:22:53.446638    6578 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:22:53.446653    6578 cni.go:84] Creating CNI manager for ""
	I0520 03:22:53.446657    6578 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 03:22:53.446660    6578 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 03:22:53.446688    6578 start.go:340] cluster config:
	{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:22:53.451220    6578 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:22:53.458542    6578 out.go:177] * Starting "ha-465000" primary control-plane node in "ha-465000" cluster
	I0520 03:22:53.462576    6578 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:22:53.462593    6578 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:22:53.462606    6578 cache.go:56] Caching tarball of preloaded images
	I0520 03:22:53.462668    6578 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:22:53.462674    6578 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:22:53.462872    6578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/ha-465000/config.json ...
	I0520 03:22:53.462884    6578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/ha-465000/config.json: {Name:mkbb8eb9871077570fa75c0208f1c101bbad61b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:22:53.463175    6578 start.go:360] acquireMachinesLock for ha-465000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:22:53.463210    6578 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "ha-465000"
	I0520 03:22:53.463223    6578 start.go:93] Provisioning new machine with config: &{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:22:53.463250    6578 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:22:53.466525    6578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:22:53.484250    6578 start.go:159] libmachine.API.Create for "ha-465000" (driver="qemu2")
	I0520 03:22:53.484276    6578 client.go:168] LocalClient.Create starting
	I0520 03:22:53.484341    6578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:22:53.484374    6578 main.go:141] libmachine: Decoding PEM data...
	I0520 03:22:53.484384    6578 main.go:141] libmachine: Parsing certificate...
	I0520 03:22:53.484421    6578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:22:53.484443    6578 main.go:141] libmachine: Decoding PEM data...
	I0520 03:22:53.484449    6578 main.go:141] libmachine: Parsing certificate...
	I0520 03:22:53.484827    6578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:22:53.611494    6578 main.go:141] libmachine: Creating SSH key...
	I0520 03:22:53.766593    6578 main.go:141] libmachine: Creating Disk image...
	I0520 03:22:53.766600    6578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:22:53.766786    6578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:22:53.779787    6578 main.go:141] libmachine: STDOUT: 
	I0520 03:22:53.779809    6578 main.go:141] libmachine: STDERR: 
	I0520 03:22:53.779872    6578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2 +20000M
	I0520 03:22:53.790840    6578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:22:53.790858    6578 main.go:141] libmachine: STDERR: 
	I0520 03:22:53.790875    6578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:22:53.790882    6578 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:22:53.790907    6578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:2e:4b:a1:b7:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:22:53.792622    6578 main.go:141] libmachine: STDOUT: 
	I0520 03:22:53.792645    6578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:22:53.792669    6578 client.go:171] duration metric: took 308.390375ms to LocalClient.Create
	I0520 03:22:55.794796    6578 start.go:128] duration metric: took 2.331563083s to createHost
	I0520 03:22:55.794854    6578 start.go:83] releasing machines lock for "ha-465000", held for 2.331670833s
	W0520 03:22:55.794921    6578 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:22:55.805232    6578 out.go:177] * Deleting "ha-465000" in qemu2 ...
	W0520 03:22:55.823340    6578 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:22:55.823381    6578 start.go:728] Will try again in 5 seconds ...
	I0520 03:23:00.825475    6578 start.go:360] acquireMachinesLock for ha-465000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:23:00.825949    6578 start.go:364] duration metric: took 375.542µs to acquireMachinesLock for "ha-465000"
	I0520 03:23:00.826115    6578 start.go:93] Provisioning new machine with config: &{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:23:00.826374    6578 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:23:00.836946    6578 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:23:00.886455    6578 start.go:159] libmachine.API.Create for "ha-465000" (driver="qemu2")
	I0520 03:23:00.886515    6578 client.go:168] LocalClient.Create starting
	I0520 03:23:00.886642    6578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:23:00.886718    6578 main.go:141] libmachine: Decoding PEM data...
	I0520 03:23:00.886736    6578 main.go:141] libmachine: Parsing certificate...
	I0520 03:23:00.886795    6578 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:23:00.886838    6578 main.go:141] libmachine: Decoding PEM data...
	I0520 03:23:00.886859    6578 main.go:141] libmachine: Parsing certificate...
	I0520 03:23:00.887363    6578 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:23:01.026740    6578 main.go:141] libmachine: Creating SSH key...
	I0520 03:23:01.186711    6578 main.go:141] libmachine: Creating Disk image...
	I0520 03:23:01.186727    6578 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:23:01.186977    6578 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:23:01.200215    6578 main.go:141] libmachine: STDOUT: 
	I0520 03:23:01.200237    6578 main.go:141] libmachine: STDERR: 
	I0520 03:23:01.200305    6578 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2 +20000M
	I0520 03:23:01.211245    6578 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:23:01.211263    6578 main.go:141] libmachine: STDERR: 
	I0520 03:23:01.211275    6578 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:23:01.211285    6578 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:23:01.211336    6578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:74:6a:1f:ff:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:23:01.213099    6578 main.go:141] libmachine: STDOUT: 
	I0520 03:23:01.213120    6578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:23:01.213138    6578 client.go:171] duration metric: took 326.623792ms to LocalClient.Create
	I0520 03:23:03.215271    6578 start.go:128] duration metric: took 2.388908625s to createHost
	I0520 03:23:03.215337    6578 start.go:83] releasing machines lock for "ha-465000", held for 2.389382166s
	W0520 03:23:03.215770    6578 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-465000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:23:03.227674    6578 out.go:177] 
	W0520 03:23:03.231845    6578 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:23:03.231886    6578 out.go:239] * 
	W0520 03:23:03.234430    6578 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:23:03.242754    6578 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-465000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (66.772833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.97s)
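The root cause for this and every remaining ha-465000 failure is visible in the stderr above: socket_vmnet_client could not reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never received its network file descriptor and both host-creation attempts aborted with GUEST_PROVISION. A minimal Go sketch of that reachability probe, offered as a standalone diagnostic for the build host rather than as minikube's own code (only the socket path is taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Attempt the same unix-socket connection that socket_vmnet_client
        // makes before handing QEMU its -netdev file descriptor.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // "connection refused" here matches the failure captured above
            // and means the socket_vmnet daemon is not listening.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails on the Jenkins host, the socket_vmnet daemon needs to be (re)started before any qemu2-driver test can pass.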

TestMultiControlPlane/serial/DeployApp (109.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.110291ms)

** stderr ** 
	error: cluster "ha-465000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- rollout status deployment/busybox: exit status 1 (56.371959ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.844292ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.212375ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.16275ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.838541ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.677ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.504208ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.142375ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.85225ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.025666ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.029416ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.146333ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.856125ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.059958ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.820458ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.604833ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.270875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (109.06s)
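Nearly all of the 109s spent here is the poll loop at ha_test.go:140/143: each pass re-runs the jsonpath query, treats a non-zero exit as "may be temporary", and sleeps before retrying until a deadline. A rough Go sketch of that retry shape, using a plain kubectl invocation and illustrative timings rather than the test's exact command and intervals:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(100 * time.Second)
        for time.Now().Before(deadline) {
            // Re-run the same query each pass; a failure is treated as
            // possibly temporary, exactly as the log above shows.
            out, err := exec.Command("kubectl", "--context", "ha-465000",
                "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil && len(out) > 0 {
                fmt.Println("pod IPs:", string(out))
                return
            }
            time.Sleep(10 * time.Second)
        }
        fmt.Println("timed out waiting for pod IPs")
    }

Because the cluster was never created, every pass fails instantly with 'no server found for cluster "ha-465000"', so the loop can only run out the clock.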

TestMultiControlPlane/serial/PingHostFromPods (0.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-465000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.018917ms)

** stderr ** 
	error: no server found for cluster "ha-465000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.290625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.08s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-465000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-465000 -v=7 --alsologtostderr: exit status 83 (41.289083ms)

-- stdout --
	* The control-plane node ha-465000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-465000"

-- /stdout --
** stderr ** 
	I0520 03:24:52.499458    6683 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:52.500038    6683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.500041    6683 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:52.500048    6683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.500221    6683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:52.500464    6683 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:52.500661    6683 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:52.505205    6683 out.go:177] * The control-plane node ha-465000 host is not running: state=Stopped
	I0520 03:24:52.509108    6683 out.go:177]   To start a cluster, run: "minikube start -p ha-465000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-465000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.436917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-465000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-465000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.674ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-465000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-465000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-465000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.311375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
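The failure mode shifts slightly here: kubectl is invoked with --context directly and reports that the context ha-465000 was never written to the kubeconfig, because the aborted start never got that far. A hedged pre-flight check using client-go's kubeconfig loader, written as a standalone diagnostic rather than as part of ha_test.go:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Merge kubeconfig files the same way kubectl does
        // (KUBECONFIG if set, otherwise ~/.kube/config).
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Println("cannot load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["ha-465000"]; !ok {
            // Matches the failure above: the aborted start left no context behind.
            fmt.Println("context \"ha-465000\" is missing from the kubeconfig")
            return
        }
        fmt.Println("context \"ha-465000\" is present")
    }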

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-465000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-465000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (28.987917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
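The two assertions above (ha_test.go:304 and :307) decode the output of "minikube profile list --output json", count Config.Nodes (expecting four: three control planes plus a worker), and check for the synthetic "HAppy" status. A sketch of that decode step run against an abbreviated version of the single-node payload from the log; the structs are trimmed to just the fields the assertions read:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Trimmed mirror of the profile-list JSON: only the fields
    // the node-count and status assertions actually inspect.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Abbreviated payload matching the log: one stopped single-node profile.
        raw := `{"invalid":[],"valid":[{"Name":"ha-465000","Status":"Stopped","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`

        var pl profileList
        if err := json.Unmarshal([]byte(raw), &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        p := pl.Valid[0]
        // Prints status=Stopped nodes=1; the test wanted "HAppy" and 4.
        fmt.Printf("profile %q: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
    }

With the cluster never started, the profile reports one stopped node, so both the node-count and status checks fail.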

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status --output json -v=7 --alsologtostderr: exit status 7 (29.269166ms)

-- stdout --
	{"Name":"ha-465000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0520 03:24:52.726295    6696 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:52.726467    6696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.726470    6696 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:52.726473    6696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.726607    6696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:52.726726    6696 out.go:298] Setting JSON to true
	I0520 03:24:52.726736    6696 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:52.726798    6696 notify.go:220] Checking for updates...
	I0520 03:24:52.726907    6696 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:52.726917    6696 status.go:255] checking status of ha-465000 ...
	I0520 03:24:52.727149    6696 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:24:52.727153    6696 status.go:343] host is not running, skipping remaining checks
	I0520 03:24:52.727155    6696 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-465000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.244ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
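The decode error at ha_test.go:333 is a shape mismatch rather than bad JSON: the test unmarshals the status output into []cmd.Status (an array, one element per node), but the single-node stdout above is a bare object. A tolerant decoder sketch; the Status struct here is cut down to two of the fields visible in that stdout:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Reduced stand-in for minikube's status record.
    type Status struct {
        Name string
        Host string
    }

    // decodeStatuses accepts either a JSON array (multi-node status)
    // or a single object (what the single-node log above produced).
    func decodeStatuses(data []byte) ([]Status, error) {
        var many []Status
        if err := json.Unmarshal(data, &many); err == nil {
            return many, nil
        }
        var one Status
        if err := json.Unmarshal(data, &one); err != nil {
            return nil, err
        }
        return []Status{one}, nil
    }

    func main() {
        // The object-shaped payload from the log, which breaks a []Status decode.
        raw := []byte(`{"Name":"ha-465000","Host":"Stopped"}`)
        sts, err := decodeStatuses(raw)
        if err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("%d status entries, first host: %s\n", len(sts), sts[0].Host)
    }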

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.26025ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0520 03:24:52.785555    6700 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:52.786051    6700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.786055    6700 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:52.786057    6700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.786221    6700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:52.786457    6700 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:52.786646    6700 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:52.791517    6700 out.go:177] 
	W0520 03:24:52.794521    6700 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0520 03:24:52.794526    6700 out.go:239] * 
	W0520 03:24:52.796528    6700 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:24:52.800473    6700 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-465000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (29.26725ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:24:52.832880    6702 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:52.833047    6702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.833051    6702 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:52.833053    6702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.833176    6702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:52.833299    6702 out.go:298] Setting JSON to false
	I0520 03:24:52.833308    6702 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:52.833367    6702 notify.go:220] Checking for updates...
	I0520 03:24:52.833516    6702 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:52.833523    6702 status.go:255] checking status of ha-465000 ...
	I0520 03:24:52.833723    6702 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:24:52.833728    6702 status.go:343] host is not running, skipping remaining checks
	I0520 03:24:52.833730    6702 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.051125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-465000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
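Note: the assertion above (ha_test.go:413) decodes the `profile list --output json` blob and checks the profile's Status field. A minimal, self-contained Go sketch of that check, assuming the binary path and JSON shape shown in the log (this is not the test's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList keeps only the fields this sketch needs from the
    // `profile list --output json` payload shown above.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        for _, p := range pl.Valid {
            // The test wants "Degraded" after one control plane stops;
            // this run reports "Stopped" because the VM never came up.
            fmt.Printf("%s: %s\n", p.Name, p.Status)
        }
    }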
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.901041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.631792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0520 03:24:52.991065    6712 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:52.992123    6712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.992127    6712 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:52.992129    6712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:52.992256    6712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:52.992508    6712 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:52.992689    6712 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:52.997271    6712 out.go:177] 
	W0520 03:24:53.001216    6712 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0520 03:24:53.001221    6712 out.go:239] * 
	* 
	W0520 03:24:53.003073    6712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:24:53.007217    6712 out.go:177] 

** /stderr **
ha_test.go:422: I0520 03:24:52.991065    6712 out.go:291] Setting OutFile to fd 1 ...
I0520 03:24:52.992123    6712 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:24:52.992127    6712 out.go:304] Setting ErrFile to fd 2...
I0520 03:24:52.992129    6712 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:24:52.992256    6712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:24:52.992508    6712 mustload.go:65] Loading cluster: ha-465000
I0520 03:24:52.992689    6712 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:24:52.997271    6712 out.go:177] 
W0520 03:24:53.001216    6712 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0520 03:24:53.001221    6712 out.go:239] * 
* 
W0520 03:24:53.003073    6712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 03:24:53.007217    6712 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-465000 node start m02 -v=7 --alsologtostderr": exit status 85
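Note: exit status 85 here accompanies the GUEST_NODE_RETRIEVE reason in the stderr above: the profile has no node named "m02" because the earlier StartCluster failure left a single-node config. A quick way to confirm is to read the Nodes array out of the profile's saved config.json (path taken from the log above; a diagnostic sketch, not part of the test suite):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // clusterConfig keeps only the node fields present in the config
    // dump shown earlier in this report.
    type clusterConfig struct {
        Nodes []struct {
            Name         string
            ControlPlane bool
            Worker       bool
        }
    }

    func main() {
        data, err := os.ReadFile("/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/ha-465000/config.json")
        if err != nil {
            fmt.Println("read config:", err)
            return
        }
        var cc clusterConfig
        if err := json.Unmarshal(data, &cc); err != nil {
            fmt.Println("decode:", err)
            return
        }
        for i, n := range cc.Nodes {
            // This run has a single entry with an empty Name, so the
            // "m02" lookup fails.
            fmt.Printf("node %d: name=%q controlPlane=%v worker=%v\n",
                i, n.Name, n.ControlPlane, n.Worker)
        }
    }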
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (29.418792ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:24:53.039949    6714 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:53.040098    6714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:53.040101    6714 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:53.040103    6714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:53.040229    6714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:53.040356    6714 out.go:298] Setting JSON to false
	I0520 03:24:53.040366    6714 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:53.040429    6714 notify.go:220] Checking for updates...
	I0520 03:24:53.040539    6714 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:53.040546    6714 status.go:255] checking status of ha-465000 ...
	I0520 03:24:53.040752    6714 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:24:53.040756    6714 status.go:343] host is not running, skipping remaining checks
	I0520 03:24:53.040759    6714 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (74.084458ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:24:53.999161    6716 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:53.999381    6716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:53.999385    6716 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:53.999389    6716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:53.999542    6716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:53.999707    6716 out.go:298] Setting JSON to false
	I0520 03:24:53.999718    6716 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:53.999757    6716 notify.go:220] Checking for updates...
	I0520 03:24:53.999987    6716 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:53.999997    6716 status.go:255] checking status of ha-465000 ...
	I0520 03:24:54.000281    6716 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:24:54.000287    6716 status.go:343] host is not running, skipping remaining checks
	I0520 03:24:54.000294    6716 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (73.801666ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:24:55.109304    6718 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:55.109542    6718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:55.109547    6718 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:55.109550    6718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:55.109694    6718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:55.109876    6718 out.go:298] Setting JSON to false
	I0520 03:24:55.109889    6718 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:55.109937    6718 notify.go:220] Checking for updates...
	I0520 03:24:55.110178    6718 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:55.110187    6718 status.go:255] checking status of ha-465000 ...
	I0520 03:24:55.110473    6718 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:24:55.110478    6718 status.go:343] host is not running, skipping remaining checks
	I0520 03:24:55.110481    6718 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (73.579792ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:24:58.487541    6720 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:24:58.487727    6720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:58.487731    6720 out.go:304] Setting ErrFile to fd 2...
	I0520 03:24:58.487734    6720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:24:58.487887    6720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:24:58.488030    6720 out.go:298] Setting JSON to false
	I0520 03:24:58.488043    6720 mustload.go:65] Loading cluster: ha-465000
	I0520 03:24:58.488076    6720 notify.go:220] Checking for updates...
	I0520 03:24:58.488297    6720 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:24:58.488312    6720 status.go:255] checking status of ha-465000 ...
	I0520 03:24:58.488590    6720 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:24:58.488595    6720 status.go:343] host is not running, skipping remaining checks
	I0520 03:24:58.488598    6720 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (73.887083ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:25:01.428146    6722 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:01.428360    6722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:01.428364    6722 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:01.428368    6722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:01.428623    6722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:01.428799    6722 out.go:298] Setting JSON to false
	I0520 03:25:01.428812    6722 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:01.428862    6722 notify.go:220] Checking for updates...
	I0520 03:25:01.429094    6722 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:01.429103    6722 status.go:255] checking status of ha-465000 ...
	I0520 03:25:01.429423    6722 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:25:01.429429    6722 status.go:343] host is not running, skipping remaining checks
	I0520 03:25:01.429432    6722 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (72.659625ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:25:05.362005    6724 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:05.362233    6724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:05.362238    6724 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:05.362241    6724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:05.362421    6724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:05.362588    6724 out.go:298] Setting JSON to false
	I0520 03:25:05.362601    6724 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:05.362638    6724 notify.go:220] Checking for updates...
	I0520 03:25:05.362877    6724 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:05.362886    6724 status.go:255] checking status of ha-465000 ...
	I0520 03:25:05.363191    6724 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:25:05.363196    6724 status.go:343] host is not running, skipping remaining checks
	I0520 03:25:05.363200    6724 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (72.923459ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:25:16.194900    6731 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:16.195102    6731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:16.195106    6731 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:16.195110    6731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:16.195273    6731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:16.195431    6731 out.go:298] Setting JSON to false
	I0520 03:25:16.195442    6731 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:16.195478    6731 notify.go:220] Checking for updates...
	I0520 03:25:16.195710    6731 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:16.195718    6731 status.go:255] checking status of ha-465000 ...
	I0520 03:25:16.196011    6731 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:25:16.196017    6731 status.go:343] host is not running, skipping remaining checks
	I0520 03:25:16.196020    6731 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (73.853417ms)

-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:25:30.859156    6742 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:30.859361    6742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:30.859365    6742 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:30.859368    6742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:30.859529    6742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:30.859692    6742 out.go:298] Setting JSON to false
	I0520 03:25:30.859704    6742 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:30.859749    6742 notify.go:220] Checking for updates...
	I0520 03:25:30.859986    6742 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:30.859995    6742 status.go:255] checking status of ha-465000 ...
	I0520 03:25:30.860310    6742 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:25:30.860316    6742 status.go:343] host is not running, skipping remaining checks
	I0520 03:25:30.860319    6742 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr" : exit status 7
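Note: the probe timestamps above (03:24:53 through 03:25:30) show the test retrying status on a growing interval before giving up. A rough Go sketch of that retry-with-backoff pattern, with illustrative durations rather than the test's actual schedule:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        delay := time.Second
        for attempt := 1; attempt <= 8; attempt++ {
            err := exec.Command("out/minikube-darwin-arm64", "-p", "ha-465000",
                "status", "-v=7", "--alsologtostderr").Run()
            if err == nil {
                fmt.Println("cluster healthy")
                return
            }
            fmt.Printf("attempt %d failed (%v); retrying in %v\n", attempt, err, delay)
            time.Sleep(delay)
            delay *= 2 // back off between probes
        }
        fmt.Println("giving up: status never became healthy")
    }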
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (32.671792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (37.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-465000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
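Note: ha_test.go:304 counts the nodes recorded under Config.Nodes in the same JSON; the dump above contains one node entry where the test expects four. A companion to the earlier profile-list sketch, reading the JSON from stdin instead of re-running the command:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Usage (hypothetical):
    //   out/minikube-darwin-arm64 profile list --output json | go run countnodes.go
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []json.RawMessage `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        var pl profileList
        if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        for _, p := range pl.Valid {
            // This run reports 1 node where the test expects 4.
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
        }
    }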
ha_test.go:307: expected profile "ha-465000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.191625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-465000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-465000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-465000 -v=7 --alsologtostderr: (3.223049542s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-465000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-465000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.220000625s)

-- stdout --
	* [ha-465000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-465000" primary control-plane node in "ha-465000" cluster
	* Restarting existing qemu2 VM for "ha-465000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-465000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:25:34.309846    6772 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:34.310031    6772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:34.310035    6772 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:34.310038    6772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:34.310206    6772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:34.311430    6772 out.go:298] Setting JSON to false
	I0520 03:25:34.330738    6772 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5105,"bootTime":1716195629,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:25:34.330810    6772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:25:34.334421    6772 out.go:177] * [ha-465000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:25:34.342475    6772 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:25:34.342516    6772 notify.go:220] Checking for updates...
	I0520 03:25:34.346359    6772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:25:34.349398    6772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:25:34.352442    6772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:25:34.355412    6772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:25:34.358391    6772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:25:34.361756    6772 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:34.361822    6772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:25:34.366325    6772 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:25:34.373389    6772 start.go:297] selected driver: qemu2
	I0520 03:25:34.373396    6772 start.go:901] validating driver "qemu2" against &{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:25:34.373469    6772 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:25:34.375872    6772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:25:34.375894    6772 cni.go:84] Creating CNI manager for ""
	I0520 03:25:34.375900    6772 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 03:25:34.375945    6772 start.go:340] cluster config:
	{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:25:34.380334    6772 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:25:34.387437    6772 out.go:177] * Starting "ha-465000" primary control-plane node in "ha-465000" cluster
	I0520 03:25:34.390339    6772 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:25:34.390367    6772 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:25:34.390382    6772 cache.go:56] Caching tarball of preloaded images
	I0520 03:25:34.390451    6772 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:25:34.390456    6772 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:25:34.390519    6772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/ha-465000/config.json ...
	I0520 03:25:34.390914    6772 start.go:360] acquireMachinesLock for ha-465000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:25:34.390953    6772 start.go:364] duration metric: took 31.666µs to acquireMachinesLock for "ha-465000"
	I0520 03:25:34.390964    6772 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:25:34.390972    6772 fix.go:54] fixHost starting: 
	I0520 03:25:34.391098    6772 fix.go:112] recreateIfNeeded on ha-465000: state=Stopped err=<nil>
	W0520 03:25:34.391106    6772 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:25:34.395395    6772 out.go:177] * Restarting existing qemu2 VM for "ha-465000" ...
	I0520 03:25:34.402449    6772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:74:6a:1f:ff:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:25:34.404575    6772 main.go:141] libmachine: STDOUT: 
	I0520 03:25:34.404601    6772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:25:34.404633    6772 fix.go:56] duration metric: took 13.662917ms for fixHost
	I0520 03:25:34.404639    6772 start.go:83] releasing machines lock for "ha-465000", held for 13.680667ms
	W0520 03:25:34.404644    6772 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:25:34.404686    6772 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:25:34.404693    6772 start.go:728] Will try again in 5 seconds ...
	I0520 03:25:39.406729    6772 start.go:360] acquireMachinesLock for ha-465000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:25:39.407099    6772 start.go:364] duration metric: took 291.75µs to acquireMachinesLock for "ha-465000"
	I0520 03:25:39.407225    6772 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:25:39.407266    6772 fix.go:54] fixHost starting: 
	I0520 03:25:39.407969    6772 fix.go:112] recreateIfNeeded on ha-465000: state=Stopped err=<nil>
	W0520 03:25:39.407994    6772 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:25:39.412492    6772 out.go:177] * Restarting existing qemu2 VM for "ha-465000" ...
	I0520 03:25:39.419541    6772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:74:6a:1f:ff:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:25:39.428372    6772 main.go:141] libmachine: STDOUT: 
	I0520 03:25:39.428491    6772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:25:39.428569    6772 fix.go:56] duration metric: took 21.303541ms for fixHost
	I0520 03:25:39.428595    6772 start.go:83] releasing machines lock for "ha-465000", held for 21.468667ms
	W0520 03:25:39.428789    6772 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-465000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-465000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:25:39.437290    6772 out.go:177] 
	W0520 03:25:39.441414    6772 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:25:39.441446    6772 out.go:239] * 
	* 
	W0520 03:25:39.444261    6772 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:25:39.452407    6772 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-465000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-465000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (33.047709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.57s)
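Note: every start attempt in this section dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket (the client hands the connected descriptor to qemu via the -netdev socket,id=net0,fd=3 argument visible above), so "Connection refused" means nothing is listening at /var/run/socket_vmnet on this host. A small Go probe for that condition (a diagnostic sketch, assuming the default socket path from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket the qemu2 driver depends on.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // Matches the "Connection refused" errors in this report.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }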

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.405125ms)

-- stdout --
	* The control-plane node ha-465000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-465000"

-- /stdout --
** stderr ** 
	I0520 03:25:39.594259    6784 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:39.594679    6784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:39.594684    6784 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:39.594686    6784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:39.594828    6784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:39.595045    6784 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:39.595224    6784 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:39.598065    6784 out.go:177] * The control-plane node ha-465000 host is not running: state=Stopped
	I0520 03:25:39.601031    6784 out.go:177]   To start a cluster, run: "minikube start -p ha-465000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-465000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (29.4375ms)

                                                
                                                
-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:25:39.632823    6786 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:39.632974    6786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:39.632977    6786 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:39.632979    6786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:39.633095    6786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:39.633206    6786 out.go:298] Setting JSON to false
	I0520 03:25:39.633215    6786 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:39.633279    6786 notify.go:220] Checking for updates...
	I0520 03:25:39.633406    6786 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:39.633413    6786 status.go:255] checking status of ha-465000 ...
	I0520 03:25:39.633611    6786 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:25:39.633615    6786 status.go:343] host is not running, skipping remaining checks
	I0520 03:25:39.633617    6786 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.456916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
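Note: the `node delete` path exits 83 before touching any node. mustload loads the profile, sees host state Stopped, prints the advice above, and aborts. A hypothetical reconstruction of that guard (the messages and exit code are taken from the log, not from minikube's source):

	package main

	import (
		"fmt"
		"os"
	)

	// hostState would come from the driver; "Stopped" is what the
	// qemu2 driver reports for ha-465000 in the log above.
	func requireRunning(profile, hostState string) {
		if hostState != "Running" {
			fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, hostState)
			fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
			os.Exit(83) // exit status observed for `node delete` above
		}
	}

	func main() {
		requireRunning("ha-465000", "Stopped")
	}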

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-465000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (31.401292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
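Note: the Degraded check never gets past JSON inspection. ha_test.go:413 unpacks `profile list --output json` and compares the profile's Status field, which reads "Stopped" because the VM never started. A minimal sketch of that comparison; the struct shape is inferred from the JSON quoted in the failure above ("valid", "Name", "Status"), not from minikube's source:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// Just the fields the check needs, named after the JSON in the
	// failure message above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded" here; with the VM never started,
			// the profile reports "Stopped" instead.
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}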

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-465000 stop -v=7 --alsologtostderr: (3.408410542s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr: exit status 7 (64.003125ms)

                                                
                                                
-- stdout --
	ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:25:43.238017    6814 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:43.238219    6814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:43.238223    6814 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:43.238227    6814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:43.238389    6814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:43.238538    6814 out.go:298] Setting JSON to false
	I0520 03:25:43.238549    6814 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:43.238593    6814 notify.go:220] Checking for updates...
	I0520 03:25:43.238824    6814 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:43.238833    6814 status.go:255] checking status of ha-465000 ...
	I0520 03:25:43.239089    6814 status.go:330] ha-465000 host status = "Stopped" (err=<nil>)
	I0520 03:25:43.239094    6814 status.go:343] host is not running, skipping remaining checks
	I0520 03:25:43.239097    6814 status.go:257] ha-465000 status: &{Name:ha-465000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-465000 status -v=7 --alsologtostderr": ha-465000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (31.856292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.50s)
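Note: the three assertions at ha_test.go:543/549/552 all scan the same `status` text, expecting output from a three-node HA cluster (two control planes, three kubelets, two apiservers). With only the single never-started node present, each count comes up short. A sketch of that tallying over the output captured above; strings.Count stands in for the test's real matching, which is an assumption:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output copied verbatim from the failure above:
		// one stopped control-plane node, nothing else.
		status := `ha-465000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	`
		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))    // 1, test wants 2
		fmt.Println("kubelets stopped:", strings.Count(status, "kubelet: Stopped"))     // 1, test wants 3
		fmt.Println("apiservers stopped:", strings.Count(status, "apiserver: Stopped")) // 1, test wants 2
	}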

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-465000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-465000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.17948375s)

                                                
                                                
-- stdout --
	* [ha-465000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-465000" primary control-plane node in "ha-465000" cluster
	* Restarting existing qemu2 VM for "ha-465000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-465000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:25:43.299138    6818 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:43.299261    6818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:43.299265    6818 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:43.299267    6818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:43.299410    6818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:43.300381    6818 out.go:298] Setting JSON to false
	I0520 03:25:43.316255    6818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5114,"bootTime":1716195629,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:25:43.316320    6818 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:25:43.321060    6818 out.go:177] * [ha-465000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:25:43.329164    6818 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:25:43.329226    6818 notify.go:220] Checking for updates...
	I0520 03:25:43.332999    6818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:25:43.336008    6818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:25:43.339062    6818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:25:43.342012    6818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:25:43.344992    6818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:25:43.348339    6818 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:43.348576    6818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:25:43.352960    6818 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:25:43.360052    6818 start.go:297] selected driver: qemu2
	I0520 03:25:43.360060    6818 start.go:901] validating driver "qemu2" against &{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:25:43.360120    6818 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:25:43.362292    6818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:25:43.362315    6818 cni.go:84] Creating CNI manager for ""
	I0520 03:25:43.362319    6818 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 03:25:43.362369    6818 start.go:340] cluster config:
	{Name:ha-465000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-465000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:25:43.366618    6818 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:25:43.374015    6818 out.go:177] * Starting "ha-465000" primary control-plane node in "ha-465000" cluster
	I0520 03:25:43.376927    6818 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:25:43.376941    6818 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:25:43.376950    6818 cache.go:56] Caching tarball of preloaded images
	I0520 03:25:43.376994    6818 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:25:43.376999    6818 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:25:43.377046    6818 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/ha-465000/config.json ...
	I0520 03:25:43.377456    6818 start.go:360] acquireMachinesLock for ha-465000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:25:43.377487    6818 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "ha-465000"
	I0520 03:25:43.377497    6818 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:25:43.377503    6818 fix.go:54] fixHost starting: 
	I0520 03:25:43.377622    6818 fix.go:112] recreateIfNeeded on ha-465000: state=Stopped err=<nil>
	W0520 03:25:43.377631    6818 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:25:43.385872    6818 out.go:177] * Restarting existing qemu2 VM for "ha-465000" ...
	I0520 03:25:43.390087    6818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:74:6a:1f:ff:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:25:43.392155    6818 main.go:141] libmachine: STDOUT: 
	I0520 03:25:43.392175    6818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:25:43.392205    6818 fix.go:56] duration metric: took 14.702584ms for fixHost
	I0520 03:25:43.392210    6818 start.go:83] releasing machines lock for "ha-465000", held for 14.718375ms
	W0520 03:25:43.392216    6818 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:25:43.392254    6818 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:25:43.392259    6818 start.go:728] Will try again in 5 seconds ...
	I0520 03:25:48.394367    6818 start.go:360] acquireMachinesLock for ha-465000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:25:48.394929    6818 start.go:364] duration metric: took 438.917µs to acquireMachinesLock for "ha-465000"
	I0520 03:25:48.395056    6818 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:25:48.395077    6818 fix.go:54] fixHost starting: 
	I0520 03:25:48.395828    6818 fix.go:112] recreateIfNeeded on ha-465000: state=Stopped err=<nil>
	W0520 03:25:48.395856    6818 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:25:48.400405    6818 out.go:177] * Restarting existing qemu2 VM for "ha-465000" ...
	I0520 03:25:48.403558    6818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:74:6a:1f:ff:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/ha-465000/disk.qcow2
	I0520 03:25:48.413503    6818 main.go:141] libmachine: STDOUT: 
	I0520 03:25:48.413584    6818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:25:48.413687    6818 fix.go:56] duration metric: took 18.610417ms for fixHost
	I0520 03:25:48.413708    6818 start.go:83] releasing machines lock for "ha-465000", held for 18.754958ms
	W0520 03:25:48.413879    6818 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-465000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-465000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:25:48.422392    6818 out.go:177] 
	W0520 03:25:48.426311    6818 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:25:48.426338    6818 out.go:239] * 
	* 
	W0520 03:25:48.428680    6818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:25:48.438337    6818 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-465000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (68.230584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
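Note: the restart log shows minikube's start retry in full: fixHost dials socket_vmnet and fails, start.go logs "Will try again in 5 seconds", the second attempt fails identically, and the run exits 80 with GUEST_PROVISION. A hypothetical reconstruction of that one-retry loop (not minikube's actual code; the dial stands in for the whole VM launch):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func startHost() error {
		// Stand-in for the full QEMU launch: the part that fails in the
		// log above is this dial inside socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // delay taken from start.go:728 above
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status asserted by the test
			}
		}
	}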

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-465000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.374292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-465000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-465000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.127375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-465000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-465000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:25:48.651149    6834 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:25:48.651321    6834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:48.651325    6834 out.go:304] Setting ErrFile to fd 2...
	I0520 03:25:48.651327    6834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:25:48.651450    6834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:25:48.651687    6834 mustload.go:65] Loading cluster: ha-465000
	I0520 03:25:48.651868    6834 config.go:182] Loaded profile config "ha-465000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:25:48.655455    6834 out.go:177] * The control-plane node ha-465000 host is not running: state=Stopped
	I0520 03:25:48.659430    6834 out.go:177]   To start a cluster, run: "minikube start -p ha-465000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-465000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.382708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-465000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-465000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-465000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-465000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-465000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-465000 -n ha-465000: exit status 7 (29.090834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-465000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

                                                
                                    
TestImageBuild/serial/Setup (9.84s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-551000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-551000 --driver=qemu2 : exit status 80 (9.769731875s)

                                                
                                                
-- stdout --
	* [image-551000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-551000" primary control-plane node in "image-551000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-551000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-551000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-551000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-551000 -n image-551000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-551000 -n image-551000: exit status 7 (68.948792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-551000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.84s)

                                                
                                    
TestJSONOutput/start/Command (9.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-469000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-469000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.935182375s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0fafe571-e3d2-410f-96d0-c23caa73ca24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-469000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb583a46-e842-41d0-9845-2d20e1313a12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"6a0bebfb-a370-4a78-8571-f1f6c37cdddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig"}}
	{"specversion":"1.0","id":"13c34621-18e9-4c6b-aa99-3a5c26ffa1ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"31ad1186-420e-42dd-8ed2-ac50d09ae0da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"abadc94b-6fa9-4daa-bca3-601ce2954e72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube"}}
	{"specversion":"1.0","id":"e3985d3f-aadd-4712-ad47-f88eda477d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8bf5524-2280-4cb3-993e-136edfd675dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7021cd7-dbce-4966-8ad9-162563525ef5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dfef3446-55b4-46c5-8a50-e1d83ff83e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-469000\" primary control-plane node in \"json-output-469000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfe0969b-ea4b-464f-801c-58d8f04ec565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"01a163e5-0c96-47de-ae72-ad9e2dc5dd6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-469000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a69c147-5766-4f50-a951-97f81b8910b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"140898f1-074e-450e-83bd-330f191fca67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e24cbc34-d8a7-484f-9b8d-7c972d9907dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-469000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"27458f22-a05a-4d31-bbe0-f579a5eea5dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"8d24cc1e-654b-4b05-821b-0ffc6eaa0553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-469000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.94s)
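Note: the two trailing errors explain each other. json_output_test.go decodes stdout line by line as CloudEvents, and the bare "OUTPUT:" / "ERROR:" lines that socket_vmnet_client writes to the same stream are not JSON, so decoding stops at the first non-JSON byte. A self-contained demonstration with encoding/json; the sample lines are abbreviated from the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`, // parses fine
			`OUTPUT: `, // raw socket_vmnet_client output, not JSON
		}
		for _, l := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				continue
			}
			fmt.Println("ok:", ev["type"])
		}
	}

The unpause failure below fails the same way, with '*' as the offending first byte of the human-readable output.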

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-469000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-469000 --output=json --user=testUser: exit status 83 (76.584375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7c396e2a-c66d-42ba-9bcc-3df6e34f9b27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-469000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"871c9926-c2d7-4c79-89e0-2d9550a8940c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-469000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-469000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-469000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-469000 --output=json --user=testUser: exit status 83 (43.329916ms)

-- stdout --
	* The control-plane node json-output-469000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-469000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-469000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-469000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-935000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-935000 --driver=qemu2 : exit status 80 (9.807941291s)

-- stdout --
	* [first-935000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-935000" primary control-plane node in "first-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-935000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-20 03:26:22.144291 -0700 PDT m=+441.656443251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-937000 -n second-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-937000 -n second-937000: exit status 85 (75.885917ms)

-- stdout --
	* Profile "second-937000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-937000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-937000" host is not running, skipping log retrieval (state="* Profile \"second-937000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-937000\"")
helpers_test.go:175: Cleaning up "second-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-937000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-20 03:26:22.447034 -0700 PDT m=+441.959191085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-935000 -n first-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-935000 -n first-935000: exit status 7 (29.047208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-935000
--- FAIL: TestMinikubeProfile (10.23s)

TestMountStart/serial/StartWithMountFirst (9.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-132000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-132000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.907366292s)

-- stdout --
	* [mount-start-1-132000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-132000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-132000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-132000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-132000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-132000 -n mount-start-1-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-132000 -n mount-start-1-132000: exit status 7 (67.075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.98s)

TestMultiNode/serial/FreshStart2Nodes (9.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.902224125s)

-- stdout --
	* [multinode-312000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-312000" primary control-plane node in "multinode-312000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-312000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:26:32.894584    6998 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:26:32.894736    6998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:26:32.894739    6998 out.go:304] Setting ErrFile to fd 2...
	I0520 03:26:32.894741    6998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:26:32.894860    6998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:26:32.895906    6998 out.go:298] Setting JSON to false
	I0520 03:26:32.911794    6998 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5163,"bootTime":1716195629,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:26:32.911857    6998 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:26:32.917319    6998 out.go:177] * [multinode-312000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:26:32.926259    6998 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:26:32.924365    6998 notify.go:220] Checking for updates...
	I0520 03:26:32.934382    6998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:26:32.938269    6998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:26:32.941245    6998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:26:32.944291    6998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:26:32.947210    6998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:26:32.950478    6998 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:26:32.954261    6998 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:26:32.961288    6998 start.go:297] selected driver: qemu2
	I0520 03:26:32.961295    6998 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:26:32.961301    6998 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:26:32.963576    6998 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:26:32.966266    6998 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:26:32.969332    6998 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:26:32.969350    6998 cni.go:84] Creating CNI manager for ""
	I0520 03:26:32.969355    6998 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 03:26:32.969358    6998 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 03:26:32.969402    6998 start.go:340] cluster config:
	{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:26:32.973832    6998 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:26:32.981332    6998 out.go:177] * Starting "multinode-312000" primary control-plane node in "multinode-312000" cluster
	I0520 03:26:32.985236    6998 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:26:32.985250    6998 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:26:32.985261    6998 cache.go:56] Caching tarball of preloaded images
	I0520 03:26:32.985315    6998 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:26:32.985321    6998 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:26:32.985527    6998 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/multinode-312000/config.json ...
	I0520 03:26:32.985539    6998 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/multinode-312000/config.json: {Name:mk1b978ffb46d249b15a27b9f560bc5ec1bf5d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:26:32.985772    6998 start.go:360] acquireMachinesLock for multinode-312000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:26:32.985811    6998 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "multinode-312000"
	I0520 03:26:32.985824    6998 start.go:93] Provisioning new machine with config: &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:26:32.985858    6998 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:26:32.993245    6998 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:26:33.011302    6998 start.go:159] libmachine.API.Create for "multinode-312000" (driver="qemu2")
	I0520 03:26:33.011343    6998 client.go:168] LocalClient.Create starting
	I0520 03:26:33.011418    6998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:26:33.011448    6998 main.go:141] libmachine: Decoding PEM data...
	I0520 03:26:33.011462    6998 main.go:141] libmachine: Parsing certificate...
	I0520 03:26:33.011525    6998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:26:33.011549    6998 main.go:141] libmachine: Decoding PEM data...
	I0520 03:26:33.011563    6998 main.go:141] libmachine: Parsing certificate...
	I0520 03:26:33.011922    6998 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:26:33.139502    6998 main.go:141] libmachine: Creating SSH key...
	I0520 03:26:33.185810    6998 main.go:141] libmachine: Creating Disk image...
	I0520 03:26:33.185815    6998 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:26:33.185986    6998 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:26:33.198716    6998 main.go:141] libmachine: STDOUT: 
	I0520 03:26:33.198741    6998 main.go:141] libmachine: STDERR: 
	I0520 03:26:33.198799    6998 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2 +20000M
	I0520 03:26:33.209482    6998 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:26:33.209497    6998 main.go:141] libmachine: STDERR: 
	I0520 03:26:33.209509    6998 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:26:33.209513    6998 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:26:33.209551    6998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:f0:e0:ab:50:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:26:33.211240    6998 main.go:141] libmachine: STDOUT: 
	I0520 03:26:33.211255    6998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:26:33.211273    6998 client.go:171] duration metric: took 199.928416ms to LocalClient.Create
	I0520 03:26:35.213425    6998 start.go:128] duration metric: took 2.2275865s to createHost
	I0520 03:26:35.213485    6998 start.go:83] releasing machines lock for "multinode-312000", held for 2.227705333s
	W0520 03:26:35.213587    6998 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:26:35.226822    6998 out.go:177] * Deleting "multinode-312000" in qemu2 ...
	W0520 03:26:35.247818    6998 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:26:35.247847    6998 start.go:728] Will try again in 5 seconds ...
	I0520 03:26:40.249924    6998 start.go:360] acquireMachinesLock for multinode-312000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:26:40.250437    6998 start.go:364] duration metric: took 343.958µs to acquireMachinesLock for "multinode-312000"
	I0520 03:26:40.250595    6998 start.go:93] Provisioning new machine with config: &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:26:40.250889    6998 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:26:40.261303    6998 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:26:40.309944    6998 start.go:159] libmachine.API.Create for "multinode-312000" (driver="qemu2")
	I0520 03:26:40.309999    6998 client.go:168] LocalClient.Create starting
	I0520 03:26:40.310124    6998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:26:40.310186    6998 main.go:141] libmachine: Decoding PEM data...
	I0520 03:26:40.310213    6998 main.go:141] libmachine: Parsing certificate...
	I0520 03:26:40.310280    6998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:26:40.310323    6998 main.go:141] libmachine: Decoding PEM data...
	I0520 03:26:40.310335    6998 main.go:141] libmachine: Parsing certificate...
	I0520 03:26:40.310900    6998 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:26:40.448873    6998 main.go:141] libmachine: Creating SSH key...
	I0520 03:26:40.699506    6998 main.go:141] libmachine: Creating Disk image...
	I0520 03:26:40.699515    6998 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:26:40.699769    6998 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:26:40.713320    6998 main.go:141] libmachine: STDOUT: 
	I0520 03:26:40.713339    6998 main.go:141] libmachine: STDERR: 
	I0520 03:26:40.713389    6998 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2 +20000M
	I0520 03:26:40.724525    6998 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:26:40.724544    6998 main.go:141] libmachine: STDERR: 
	I0520 03:26:40.724554    6998 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:26:40.724559    6998 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:26:40.724586    6998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6f:70:bf:3c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:26:40.726319    6998 main.go:141] libmachine: STDOUT: 
	I0520 03:26:40.726335    6998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:26:40.726347    6998 client.go:171] duration metric: took 416.349667ms to LocalClient.Create
	I0520 03:26:42.728486    6998 start.go:128] duration metric: took 2.477608416s to createHost
	I0520 03:26:42.728536    6998 start.go:83] releasing machines lock for "multinode-312000", held for 2.478103667s
	W0520 03:26:42.728910    6998 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:26:42.738592    6998 out.go:177] 
	W0520 03:26:42.743631    6998 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:26:42.743771    6998 out.go:239] * 
	* 
	W0520 03:26:42.746394    6998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:26:42.755558    6998 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-312000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (69.567208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.97s)
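
Every start failure in this run bottoms out at the same step, visible in the trace above: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the client cannot connect to the daemon's Unix socket. A minimal sketch that checks the same precondition directly (the socket path is taken from the log; the check assumes nothing about the daemon beyond that path):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config dumped above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" matches the driver's failure: the socket file
		// may exist, but no socket_vmnet daemon is accepting connections on it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}

With the daemon down, every profile creation in this report aborts with GUEST_PROVISION after a single delete-and-retry cycle, which is why so many otherwise unrelated tests fail in under ten seconds.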

TestMultiNode/serial/DeployApp2Nodes (111.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (58.157ms)

** stderr ** 
	error: cluster "multinode-312000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- rollout status deployment/busybox: exit status 1 (55.5825ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.947625ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.980791ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.930458ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.7615ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.611291ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.218125ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.913375ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.284125ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.258042ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.994ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.306125ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.093208ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.760833ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.482542ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.456125ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.4975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (111.03s)
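
The ~111s spent here is the harness's retry loop, not cluster work: multinode_test.go:505/508 polls the pod IPs eleven times, treating each failure as "may be temporary", before :524 gives up. A sketch of that poll-and-retry pattern (a plausible reconstruction with an assumed fixed sleep between attempts, not multinode_test.go's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryUntil retries fn with a fixed sleep until it succeeds or attempts run
// out. Eleven attempts spaced roughly ten seconds apart would account for
// most of the 111s this test takes even though the cluster never came up.
func retryUntil(attempts int, sleep time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(sleep)
	}
	return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	err := retryUntil(11, 10*time.Second, func() error {
		// The query under retry in the log above.
		return exec.Command("out/minikube-darwin-arm64", "kubectl",
			"-p", "multinode-312000", "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Run()
	})
	fmt.Println(err)
}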

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-312000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.226ms)

** stderr ** 
	error: no server found for cluster "multinode-312000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.66925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-312000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-312000 -v 3 --alsologtostderr: exit status 83 (39.596125ms)

-- stdout --
	* The control-plane node multinode-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-312000"

-- /stdout --
** stderr ** 
	I0520 03:28:33.976785    7086 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:33.976947    7086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:33.976950    7086 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:33.976952    7086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:33.977077    7086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:33.977306    7086 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:33.977492    7086 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:33.982402    7086 out.go:177] * The control-plane node multinode-312000 host is not running: state=Stopped
	I0520 03:28:33.985243    7086 out.go:177]   To start a cluster, run: "minikube start -p multinode-312000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-312000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.024291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-312000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-312000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.341833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-312000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-312000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-312000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
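Note on the second error above: it is a cascade of the first, not a separate bug. Because the context is missing, kubectl exits non-zero with empty stdout, and decoding empty input with Go's encoding/json always yields "unexpected end of JSON input". A minimal sketch (hypothetical, not the test's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Decoding the empty stdout of the failed kubectl call reproduces
	// the exact secondary error reported at multinode_test.go:230.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}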
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.142958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-312000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-312000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-312000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"multinode-312000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (28.821125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --output json --alsologtostderr: exit status 7 (29.286958ms)

-- stdout --
	{"Name":"multinode-312000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0520 03:28:34.204841    7099 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:34.204991    7099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.204994    7099 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:34.204999    7099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.205126    7099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:34.205236    7099 out.go:298] Setting JSON to true
	I0520 03:28:34.205248    7099 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:34.205296    7099 notify.go:220] Checking for updates...
	I0520 03:28:34.205456    7099 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:34.205462    7099 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:34.205679    7099 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:34.205682    7099 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:34.205685    7099 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-312000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
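The decode error is a shape mismatch: for a single-node cluster minikube prints one JSON object (as in the stdout above), while the multi-node test decodes into a slice. A minimal reproduction (hypothetical Status type, not minikube's cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The stdout above: one object, not an array.
	out := []byte(`{"Name":"multinode-312000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	var st []Status
	err := json.Unmarshal(out, &st)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}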
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.237875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 node stop m03: exit status 85 (46.548292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-312000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status: exit status 7 (29.623166ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr: exit status 7 (29.198375ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:34.340276    7107 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:34.340416    7107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.340419    7107 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:34.340421    7107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.340539    7107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:34.340665    7107 out.go:298] Setting JSON to false
	I0520 03:28:34.340674    7107 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:34.340727    7107 notify.go:220] Checking for updates...
	I0520 03:28:34.340879    7107 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:34.340886    7107 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:34.341093    7107 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:34.341097    7107 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:34.341099    7107 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr": multinode-312000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.110541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (44.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.513041ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0520 03:28:34.398582    7111 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:34.399027    7111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.399030    7111 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:34.399033    7111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.399215    7111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:34.399420    7111 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:34.399602    7111 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:34.404052    7111 out.go:177] 
	W0520 03:28:34.408022    7111 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0520 03:28:34.408027    7111 out.go:239] * 
	* 
	W0520 03:28:34.409885    7111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:28:34.413959    7111 out.go:177] 

** /stderr **
multinode_test.go:284: I0520 03:28:34.398582    7111 out.go:291] Setting OutFile to fd 1 ...
I0520 03:28:34.399027    7111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:28:34.399030    7111 out.go:304] Setting ErrFile to fd 2...
I0520 03:28:34.399033    7111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 03:28:34.399215    7111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
I0520 03:28:34.399420    7111 mustload.go:65] Loading cluster: multinode-312000
I0520 03:28:34.399602    7111 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 03:28:34.404052    7111 out.go:177] 
W0520 03:28:34.408022    7111 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0520 03:28:34.408027    7111 out.go:239] * 
* 
W0520 03:28:34.409885    7111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0520 03:28:34.413959    7111 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-312000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (29.262667ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:34.446654    7113 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:34.446791    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.446795    7113 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:34.446797    7113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:34.446918    7113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:34.447027    7113 out.go:298] Setting JSON to false
	I0520 03:28:34.447040    7113 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:34.447099    7113 notify.go:220] Checking for updates...
	I0520 03:28:34.447233    7113 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:34.447240    7113 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:34.447439    7113 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:34.447443    7113 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:34.447445    7113 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (73.608166ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:35.362165    7115 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:35.362373    7115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:35.362377    7115 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:35.362380    7115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:35.362573    7115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:35.362733    7115 out.go:298] Setting JSON to false
	I0520 03:28:35.362744    7115 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:35.362788    7115 notify.go:220] Checking for updates...
	I0520 03:28:35.362997    7115 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:35.363006    7115 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:35.363290    7115 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:35.363295    7115 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:35.363298    7115 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (72.1865ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:37.478961    7117 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:37.479157    7117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:37.479161    7117 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:37.479164    7117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:37.479325    7117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:37.479497    7117 out.go:298] Setting JSON to false
	I0520 03:28:37.479508    7117 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:37.479542    7117 notify.go:220] Checking for updates...
	I0520 03:28:37.479776    7117 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:37.479785    7117 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:37.480067    7117 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:37.480073    7117 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:37.480076    7117 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (72.669625ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:39.311550    7119 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:39.311749    7119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:39.311754    7119 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:39.311757    7119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:39.311935    7119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:39.312118    7119 out.go:298] Setting JSON to false
	I0520 03:28:39.312135    7119 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:39.312170    7119 notify.go:220] Checking for updates...
	I0520 03:28:39.312414    7119 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:39.312422    7119 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:39.312708    7119 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:39.312712    7119 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:39.312715    7119 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (74.02225ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:44.363330    7121 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:44.363547    7121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:44.363551    7121 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:44.363555    7121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:44.363727    7121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:44.363883    7121 out.go:298] Setting JSON to false
	I0520 03:28:44.363895    7121 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:44.363935    7121 notify.go:220] Checking for updates...
	I0520 03:28:44.364192    7121 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:44.364200    7121 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:44.364465    7121 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:44.364470    7121 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:44.364474    7121 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (68.945083ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:51.814060    7123 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:51.814219    7123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:51.814223    7123 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:51.814226    7123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:51.814389    7123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:51.814550    7123 out.go:298] Setting JSON to false
	I0520 03:28:51.814562    7123 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:51.814592    7123 notify.go:220] Checking for updates...
	I0520 03:28:51.814824    7123 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:51.814833    7123 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:51.815120    7123 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:51.815125    7123 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:51.815128    7123 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (71.956083ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:28:57.438869    7126 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:28:57.439076    7126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:57.439080    7126 out.go:304] Setting ErrFile to fd 2...
	I0520 03:28:57.439083    7126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:28:57.439244    7126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:28:57.439407    7126 out.go:298] Setting JSON to false
	I0520 03:28:57.439418    7126 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:28:57.439452    7126 notify.go:220] Checking for updates...
	I0520 03:28:57.439678    7126 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:28:57.439691    7126 status.go:255] checking status of multinode-312000 ...
	I0520 03:28:57.439998    7126 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:28:57.440003    7126 status.go:343] host is not running, skipping remaining checks
	I0520 03:28:57.440006    7126 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (71.811708ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:29:05.079278    7129 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:05.079476    7129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:05.079480    7129 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:05.079483    7129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:05.079678    7129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:05.079878    7129 out.go:298] Setting JSON to false
	I0520 03:29:05.079891    7129 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:29:05.079932    7129 notify.go:220] Checking for updates...
	I0520 03:29:05.080140    7129 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:05.080149    7129 status.go:255] checking status of multinode-312000 ...
	I0520 03:29:05.080437    7129 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:29:05.080442    7129 status.go:343] host is not running, skipping remaining checks
	I0520 03:29:05.080445    7129 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr: exit status 7 (73.330292ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:29:18.631821    7131 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:18.632034    7131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:18.632038    7131 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:18.632041    7131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:18.632227    7131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:18.632417    7131 out.go:298] Setting JSON to false
	I0520 03:29:18.632434    7131 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:29:18.632471    7131 notify.go:220] Checking for updates...
	I0520 03:29:18.632678    7131 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:18.632686    7131 status.go:255] checking status of multinode-312000 ...
	I0520 03:29:18.633004    7131 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:29:18.633009    7131 status.go:343] host is not running, skipping remaining checks
	I0520 03:29:18.633013    7131 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-312000 status -v=7 --alsologtostderr" : exit status 7
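The repeated status checks above follow a retry loop with growing delays (the timestamps run from 03:28:34 to 03:29:18); every attempt exits 7 because the host never leaves Stopped. A sketch of that polling pattern (assumed, not the suite's exact retry helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(45 * time.Second)
	delay := time.Second
	for time.Now().Before(deadline) {
		// Exit status 7 (host stopped) surfaces as a non-nil err here.
		if out, err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-312000", "status").CombinedOutput(); err == nil {
			fmt.Printf("node is up:\n%s", out)
			return
		}
		time.Sleep(delay)
		delay *= 2 // roughly matches the widening gaps between attempts above
	}
	fmt.Println("gave up: host never left Stopped")
}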
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (32.348625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (44.30s)

TestMultiNode/serial/RestartKeepsNodes (8.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-312000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-312000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-312000: (3.617457917s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219141875s)

-- stdout --
	* [multinode-312000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-312000" primary control-plane node in "multinode-312000" cluster
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:29:22.376427    7159 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:22.376614    7159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:22.376618    7159 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:22.376622    7159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:22.376772    7159 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:22.377892    7159 out.go:298] Setting JSON to false
	I0520 03:29:22.396778    7159 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5333,"bootTime":1716195629,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:29:22.396842    7159 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:29:22.400917    7159 out.go:177] * [multinode-312000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:29:22.408899    7159 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:29:22.408917    7159 notify.go:220] Checking for updates...
	I0520 03:29:22.415805    7159 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:29:22.418855    7159 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:29:22.421863    7159 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:29:22.424874    7159 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:29:22.427843    7159 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:29:22.431087    7159 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:22.431134    7159 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:29:22.435812    7159 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:29:22.442793    7159 start.go:297] selected driver: qemu2
	I0520 03:29:22.442801    7159 start.go:901] validating driver "qemu2" against &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:29:22.442849    7159 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:29:22.445232    7159 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:29:22.445258    7159 cni.go:84] Creating CNI manager for ""
	I0520 03:29:22.445263    7159 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 03:29:22.445307    7159 start.go:340] cluster config:
	{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:29:22.449871    7159 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:22.456853    7159 out.go:177] * Starting "multinode-312000" primary control-plane node in "multinode-312000" cluster
	I0520 03:29:22.460835    7159 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:29:22.460851    7159 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:29:22.460866    7159 cache.go:56] Caching tarball of preloaded images
	I0520 03:29:22.460929    7159 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:29:22.460936    7159 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:29:22.461005    7159 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/multinode-312000/config.json ...
	I0520 03:29:22.461427    7159 start.go:360] acquireMachinesLock for multinode-312000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:29:22.461468    7159 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "multinode-312000"
	I0520 03:29:22.461479    7159 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:29:22.461484    7159 fix.go:54] fixHost starting: 
	I0520 03:29:22.461612    7159 fix.go:112] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0520 03:29:22.461621    7159 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:29:22.465849    7159 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0520 03:29:22.469865    7159 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6f:70:bf:3c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:29:22.471909    7159 main.go:141] libmachine: STDOUT: 
	I0520 03:29:22.471934    7159 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:29:22.471965    7159 fix.go:56] duration metric: took 10.481583ms for fixHost
	I0520 03:29:22.471970    7159 start.go:83] releasing machines lock for "multinode-312000", held for 10.496834ms
	W0520 03:29:22.471977    7159 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:29:22.472010    7159 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:29:22.472015    7159 start.go:728] Will try again in 5 seconds ...
	I0520 03:29:27.472984    7159 start.go:360] acquireMachinesLock for multinode-312000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:29:27.473388    7159 start.go:364] duration metric: took 307.416µs to acquireMachinesLock for "multinode-312000"
	I0520 03:29:27.473497    7159 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:29:27.473520    7159 fix.go:54] fixHost starting: 
	I0520 03:29:27.474272    7159 fix.go:112] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0520 03:29:27.474296    7159 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:29:27.478972    7159 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0520 03:29:27.487943    7159 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6f:70:bf:3c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:29:27.497112    7159 main.go:141] libmachine: STDOUT: 
	I0520 03:29:27.497207    7159 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:29:27.497329    7159 fix.go:56] duration metric: took 23.808875ms for fixHost
	I0520 03:29:27.497350    7159 start.go:83] releasing machines lock for "multinode-312000", held for 23.936458ms
	W0520 03:29:27.497592    7159 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:29:27.504972    7159 out.go:177] 
	W0520 03:29:27.508087    7159 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:29:27.508115    7159 out.go:239] * 
	* 
	W0520 03:29:27.510868    7159 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:29:27.518910    7159 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-312000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-312000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (32.069917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.97s)
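
Every failure in this section dies at the same first step: socket_vmnet_client cannot open /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 never receives its network file descriptor (fd=3 in the command lines above) and the VM never boots. A minimal pre-flight sketch in Go, assuming only that a daemon should be listening on the unix-socket path shown in the qemu invocation; this helper is illustrative and not part of the test suite:

// preflight.go — hypothetical pre-flight check, not part of the minikube
// test suite: verifies that the socket_vmnet daemon is accepting
// connections before any qemu2-driver test is attempted.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path used by the qemu2 driver above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is exactly the state the logs show: "Connection refused"
		// means nothing is listening on the socket, i.e. socket_vmnet
		// is not running (or is listening on a different path).
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on the affected host, this would fail in the same way the driver does, which would point at the socket_vmnet daemon on the Jenkins machine rather than at minikube itself.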

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 node delete m03: exit status 83 (44.204708ms)

-- stdout --
	* The control-plane node multinode-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-312000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-312000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr: exit status 7 (28.343667ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:29:27.702793    7175 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:27.702983    7175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:27.702987    7175 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:27.702989    7175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:27.703111    7175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:27.703236    7175 out.go:298] Setting JSON to false
	I0520 03:29:27.703245    7175 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:29:27.703316    7175 notify.go:220] Checking for updates...
	I0520 03:29:27.703450    7175 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:27.703457    7175 status.go:255] checking status of multinode-312000 ...
	I0520 03:29:27.703680    7175 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:29:27.703684    7175 status.go:343] host is not running, skipping remaining checks
	I0520 03:29:27.703686    7175 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (27.929417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
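
The helpers above branch on process exit codes: 80 for the failed start, 83 for the "host is not running" advisory, 7 for a status query against a stopped host. A sketch of the underlying pattern, capturing combined output and the exit code with the Go standard library; the function here is an illustrative stand-in, not the actual test helper:

// runcmd.go — a minimal sketch of the pattern behind the "(dbg) Run" /
// "Non-zero exit" lines above: run a minikube command, capture combined
// output, and recover the process exit code for assertions.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output and exit code,
// mirroring how the helpers report "exit status 83" etc.
func run(name string, args ...string) (string, int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return string(out), ee.ExitCode(), nil // ran, but exited non-zero
		}
		return string(out), -1, err // command could not be started at all
	}
	return string(out), 0, nil
}

func main() {
	out, code, err := run("out/minikube-darwin-arm64", "-p", "multinode-312000", "node", "delete", "m03")
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit=%d\n%s", code, out)
}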

TestMultiNode/serial/StopMultiNode (3.2s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-312000 stop: (3.074752708s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status: exit status 7 (61.252917ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr: exit status 7 (31.34725ms)

-- stdout --
	multinode-312000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0520 03:29:30.898701    7199 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:30.898897    7199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:30.898900    7199 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:30.898902    7199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:30.899020    7199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:30.899139    7199 out.go:298] Setting JSON to false
	I0520 03:29:30.899149    7199 mustload.go:65] Loading cluster: multinode-312000
	I0520 03:29:30.899209    7199 notify.go:220] Checking for updates...
	I0520 03:29:30.899352    7199 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:30.899358    7199 status.go:255] checking status of multinode-312000 ...
	I0520 03:29:30.899554    7199 status.go:330] multinode-312000 host status = "Stopped" (err=<nil>)
	I0520 03:29:30.899557    7199 status.go:343] host is not running, skipping remaining checks
	I0520 03:29:30.899559    7199 status.go:257] multinode-312000 status: &{Name:multinode-312000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr": multinode-312000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-312000 status --alsologtostderr": multinode-312000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.006ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.20s)
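
The --format={{.Host}} flag used in the post-mortem steps is a Go text/template evaluated against the status struct that the stderr capture prints verbatim (&{Name:multinode-312000 Host:Stopped Kubelet:Stopped ...}). A self-contained sketch with a pared-down stand-in for that struct (the real type lives in minikube; the one below is illustrative):

// statusfmt.go — a sketch of what `--format={{.Host}}` does with the
// status value printed in the stderr capture above.
package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for minikube's status struct.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{Name: "multinode-312000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Equivalent of: minikube status --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	tmpl.Execute(os.Stdout, st) // prints: Stopped
}

This is why the post-mortem prints just "Stopped": the template selects a single field of the struct, and exit status 7 signals the stopped state alongside it.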

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179170667s)

-- stdout --
	* [multinode-312000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-312000" primary control-plane node in "multinode-312000" cluster
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-312000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:29:30.956694    7203 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:30.956846    7203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:30.956849    7203 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:30.956851    7203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:30.956997    7203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:30.957965    7203 out.go:298] Setting JSON to false
	I0520 03:29:30.974025    7203 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5341,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:29:30.974141    7203 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:29:30.978887    7203 out.go:177] * [multinode-312000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:29:30.985770    7203 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:29:30.985837    7203 notify.go:220] Checking for updates...
	I0520 03:29:30.989798    7203 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:29:30.993873    7203 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:29:30.996774    7203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:29:31.000765    7203 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:29:31.003773    7203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:29:31.007082    7203 config.go:182] Loaded profile config "multinode-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:31.007340    7203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:29:31.011782    7203 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:29:31.018739    7203 start.go:297] selected driver: qemu2
	I0520 03:29:31.018745    7203 start.go:901] validating driver "qemu2" against &{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:29:31.018794    7203 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:29:31.021178    7203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:29:31.021201    7203 cni.go:84] Creating CNI manager for ""
	I0520 03:29:31.021207    7203 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 03:29:31.021270    7203 start.go:340] cluster config:
	{Name:multinode-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:29:31.025660    7203 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:31.032783    7203 out.go:177] * Starting "multinode-312000" primary control-plane node in "multinode-312000" cluster
	I0520 03:29:31.036748    7203 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:29:31.036765    7203 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:29:31.036776    7203 cache.go:56] Caching tarball of preloaded images
	I0520 03:29:31.036833    7203 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:29:31.036839    7203 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:29:31.036898    7203 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/multinode-312000/config.json ...
	I0520 03:29:31.037307    7203 start.go:360] acquireMachinesLock for multinode-312000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:29:31.037338    7203 start.go:364] duration metric: took 21.791µs to acquireMachinesLock for "multinode-312000"
	I0520 03:29:31.037348    7203 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:29:31.037355    7203 fix.go:54] fixHost starting: 
	I0520 03:29:31.037471    7203 fix.go:112] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0520 03:29:31.037478    7203 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:29:31.041642    7203 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0520 03:29:31.049791    7203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6f:70:bf:3c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:29:31.051790    7203 main.go:141] libmachine: STDOUT: 
	I0520 03:29:31.051811    7203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:29:31.051839    7203 fix.go:56] duration metric: took 14.484542ms for fixHost
	I0520 03:29:31.051843    7203 start.go:83] releasing machines lock for "multinode-312000", held for 14.501458ms
	W0520 03:29:31.051849    7203 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:29:31.051885    7203 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:29:31.051890    7203 start.go:728] Will try again in 5 seconds ...
	I0520 03:29:36.053901    7203 start.go:360] acquireMachinesLock for multinode-312000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:29:36.054258    7203 start.go:364] duration metric: took 280.959µs to acquireMachinesLock for "multinode-312000"
	I0520 03:29:36.054404    7203 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:29:36.054446    7203 fix.go:54] fixHost starting: 
	I0520 03:29:36.055283    7203 fix.go:112] recreateIfNeeded on multinode-312000: state=Stopped err=<nil>
	W0520 03:29:36.055313    7203 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:29:36.059748    7203 out.go:177] * Restarting existing qemu2 VM for "multinode-312000" ...
	I0520 03:29:36.063824    7203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:6f:70:bf:3c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/multinode-312000/disk.qcow2
	I0520 03:29:36.072986    7203 main.go:141] libmachine: STDOUT: 
	I0520 03:29:36.073074    7203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:29:36.073144    7203 fix.go:56] duration metric: took 18.699458ms for fixHost
	I0520 03:29:36.073167    7203 start.go:83] releasing machines lock for "multinode-312000", held for 18.885ms
	W0520 03:29:36.073355    7203 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:29:36.080674    7203 out.go:177] 
	W0520 03:29:36.084598    7203 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:29:36.084641    7203 out.go:239] * 
	* 
	W0520 03:29:36.087235    7203 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:29:36.095719    7203 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-312000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (70.132167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
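
The restart flow in these logs is a fixed two-attempt loop: fixHost fails, minikube prints "StartHost failed, but will try again", sleeps five seconds ("Will try again in 5 seconds ..."), retries once, then exits with GUEST_PROVISION. A compressed sketch of that control flow; the function names here are illustrative stand-ins, not minikube's:

// retry.go — a sketch of the start/retry behavior visible above:
// one failed attempt, a fixed 5 s pause, one more attempt, then a
// hard failure.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	// Stand-in for the qemu2 driver start; in the logs this always fails
	// with: Failed to connect to "/var/run/socket_vmnet": Connection refused
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err)
		}
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}

Because the underlying socket never comes up, the second attempt is guaranteed to fail the same way, which is why every qemu2 test in this run lands on exit status 80 within roughly five to ten seconds.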

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-312000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000-m01 --driver=qemu2 : exit status 80 (9.869711959s)

-- stdout --
	* [multinode-312000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-312000-m01" primary control-plane node in "multinode-312000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-312000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-312000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-312000-m02 --driver=qemu2 : exit status 80 (10.019127208s)

-- stdout --
	* [multinode-312000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-312000-m02" primary control-plane node in "multinode-312000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-312000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-312000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-312000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-312000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-312000: exit status 83 (80.2105ms)

-- stdout --
	* The control-plane node multinode-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-312000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-312000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-312000 -n multinode-312000: exit status 7 (29.822833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)
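
What this test exercises: multi-node profiles name their extra machines with -mNN suffixes (the earlier DeleteNode test targets a node called m03), so starting a new standalone profile literally named multinode-312000-m01 risks colliding with the node namespace of the existing multinode-312000 cluster. A hedged sketch of such a suffix check, illustrative only and not minikube's actual validation code:

// nameconflict.go — a sketch of the naming collision this test probes:
// does a requested profile name look like a node of an existing profile?
package main

import (
	"fmt"
	"regexp"
)

// nodeSuffix matches names of the form <base>-mNN (two or more digits).
var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d{2,})$`)

// conflictsWith reports whether requesting profile `name` would clash
// with the node-naming scheme of an existing profile.
func conflictsWith(name, existing string) bool {
	m := nodeSuffix.FindStringSubmatch(name)
	return m != nil && m[1] == existing
}

func main() {
	fmt.Println(conflictsWith("multinode-312000-m01", "multinode-312000")) // true
	fmt.Println(conflictsWith("multinode-312000-m02", "multinode-312000")) // true
	fmt.Println(conflictsWith("test-preload-321000", "multinode-312000"))  // false
}

Here the test never reaches the interesting assertion: both conflicting profiles fail to start for the same socket_vmnet reason before any name validation can be observed.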

TestPreload (10.17s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-321000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-321000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.005796s)

-- stdout --
	* [test-preload-321000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-321000" primary control-plane node in "test-preload-321000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-321000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:29:56.463562    7259 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:29:56.463702    7259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:56.463705    7259 out.go:304] Setting ErrFile to fd 2...
	I0520 03:29:56.463707    7259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:29:56.463835    7259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:29:56.464925    7259 out.go:298] Setting JSON to false
	I0520 03:29:56.480840    7259 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5367,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:29:56.480899    7259 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:29:56.486271    7259 out.go:177] * [test-preload-321000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:29:56.493317    7259 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:29:56.497226    7259 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:29:56.493374    7259 notify.go:220] Checking for updates...
	I0520 03:29:56.503214    7259 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:29:56.506189    7259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:29:56.509225    7259 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:29:56.512242    7259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:29:56.515569    7259 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:29:56.515635    7259 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:29:56.520261    7259 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:29:56.526246    7259 start.go:297] selected driver: qemu2
	I0520 03:29:56.526252    7259 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:29:56.526267    7259 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:29:56.528477    7259 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:29:56.531230    7259 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:29:56.534338    7259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:29:56.534368    7259 cni.go:84] Creating CNI manager for ""
	I0520 03:29:56.534375    7259 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:29:56.534379    7259 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:29:56.534413    7259 start.go:340] cluster config:
	{Name:test-preload-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:29:56.538610    7259 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.545203    7259 out.go:177] * Starting "test-preload-321000" primary control-plane node in "test-preload-321000" cluster
	I0520 03:29:56.549255    7259 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0520 03:29:56.549351    7259 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/test-preload-321000/config.json ...
	I0520 03:29:56.549377    7259 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/test-preload-321000/config.json: {Name:mk64aa2dc15cf7c90a3ad713447f1c570d483232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:29:56.549383    7259 cache.go:107] acquiring lock: {Name:mk66345377435c370bdd94262cb2f18321c8806b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549385    7259 cache.go:107] acquiring lock: {Name:mk885b91a88faccd5c3b27db5782ba47dca16b6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549397    7259 cache.go:107] acquiring lock: {Name:mk6385d84c5b43c0f08910059df45aa42013064e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549577    7259 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 03:29:56.549591    7259 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:29:56.549593    7259 cache.go:107] acquiring lock: {Name:mk4060d7743329f9b9a162e725964882a859ebe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549639    7259 start.go:360] acquireMachinesLock for test-preload-321000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:29:56.549613    7259 cache.go:107] acquiring lock: {Name:mk0466aa060eeeb33d5dd838767bd4b7f1b88ad1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549657    7259 cache.go:107] acquiring lock: {Name:mk9876e1f5fe7f255d8f4b769fb7692c830bfa3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549695    7259 cache.go:107] acquiring lock: {Name:mkd1515f793e54d6bcf31ea69453e5c546d0dc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549718    7259 cache.go:107] acquiring lock: {Name:mkd068277c406cfe78dfddfa152a35645fa85976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:29:56.549729    7259 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:29:56.549764    7259 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 03:29:56.549672    7259 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "test-preload-321000"
	I0520 03:29:56.549580    7259 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 03:29:56.549868    7259 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 03:29:56.549881    7259 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:29:56.549892    7259 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 03:29:56.549895    7259 start.go:93] Provisioning new machine with config: &{Name:test-preload-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:29:56.549940    7259 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:29:56.557198    7259 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:29:56.560115    7259 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:29:56.560181    7259 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:29:56.560204    7259 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:29:56.560228    7259 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0520 03:29:56.560299    7259 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0520 03:29:56.560349    7259 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0520 03:29:56.563535    7259 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 03:29:56.564054    7259 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0520 03:29:56.574636    7259 start.go:159] libmachine.API.Create for "test-preload-321000" (driver="qemu2")
	I0520 03:29:56.574659    7259 client.go:168] LocalClient.Create starting
	I0520 03:29:56.574748    7259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:29:56.574778    7259 main.go:141] libmachine: Decoding PEM data...
	I0520 03:29:56.574793    7259 main.go:141] libmachine: Parsing certificate...
	I0520 03:29:56.574830    7259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:29:56.574852    7259 main.go:141] libmachine: Decoding PEM data...
	I0520 03:29:56.574859    7259 main.go:141] libmachine: Parsing certificate...
	I0520 03:29:56.575251    7259 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:29:56.708808    7259 main.go:141] libmachine: Creating SSH key...
	I0520 03:29:56.823968    7259 main.go:141] libmachine: Creating Disk image...
	I0520 03:29:56.823989    7259 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:29:56.824198    7259 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2
	I0520 03:29:56.837432    7259 main.go:141] libmachine: STDOUT: 
	I0520 03:29:56.837456    7259 main.go:141] libmachine: STDERR: 
	I0520 03:29:56.837519    7259 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2 +20000M
	I0520 03:29:56.849652    7259 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:29:56.849677    7259 main.go:141] libmachine: STDERR: 
	I0520 03:29:56.849698    7259 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2
	I0520 03:29:56.849701    7259 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:29:56.849744    7259 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:7a:63:4f:f8:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2
	I0520 03:29:56.851824    7259 main.go:141] libmachine: STDOUT: 
	I0520 03:29:56.851842    7259 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:29:56.851858    7259 client.go:171] duration metric: took 277.199958ms to LocalClient.Create
	I0520 03:29:57.160240    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0520 03:29:57.164536    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0520 03:29:57.167463    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0520 03:29:57.172769    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0520 03:29:57.173925    7259 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 03:29:57.173997    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 03:29:57.196267    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0520 03:29:57.199095    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 03:29:57.290443    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0520 03:29:57.290496    7259 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 740.861583ms
	I0520 03:29:57.290546    7259 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0520 03:29:57.359788    7259 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 03:29:57.359868    7259 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 03:29:57.852863    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 03:29:57.852916    7259 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.303556042s
	I0520 03:29:57.852941    7259 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 03:29:58.715309    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0520 03:29:58.715365    7259 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.165712s
	I0520 03:29:58.715416    7259 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0520 03:29:58.852146    7259 start.go:128] duration metric: took 2.302227167s to createHost
	I0520 03:29:58.852187    7259 start.go:83] releasing machines lock for "test-preload-321000", held for 2.302425167s
	W0520 03:29:58.852238    7259 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:29:58.861961    7259 out.go:177] * Deleting "test-preload-321000" in qemu2 ...
	W0520 03:29:58.887240    7259 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:29:58.887278    7259 start.go:728] Will try again in 5 seconds ...
	I0520 03:30:00.056850    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0520 03:30:00.056914    7259 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.5073565s
	I0520 03:30:00.056947    7259 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0520 03:30:00.601176    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0520 03:30:00.601231    7259 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.05192s
	I0520 03:30:00.601277    7259 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0520 03:30:01.182647    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0520 03:30:01.182683    7259 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.633218584s
	I0520 03:30:01.182699    7259 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0520 03:30:02.884205    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0520 03:30:02.884259    7259 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.334978792s
	I0520 03:30:02.884283    7259 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0520 03:30:03.887569    7259 start.go:360] acquireMachinesLock for test-preload-321000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:30:03.888100    7259 start.go:364] duration metric: took 429.875µs to acquireMachinesLock for "test-preload-321000"
	I0520 03:30:03.888242    7259 start.go:93] Provisioning new machine with config: &{Name:test-preload-321000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-321000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:30:03.888445    7259 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:30:03.898038    7259 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:30:03.946312    7259 start.go:159] libmachine.API.Create for "test-preload-321000" (driver="qemu2")
	I0520 03:30:03.946353    7259 client.go:168] LocalClient.Create starting
	I0520 03:30:03.946463    7259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:30:03.946533    7259 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:03.946556    7259 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:03.946632    7259 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:30:03.946675    7259 main.go:141] libmachine: Decoding PEM data...
	I0520 03:30:03.946688    7259 main.go:141] libmachine: Parsing certificate...
	I0520 03:30:03.947220    7259 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:30:04.090191    7259 main.go:141] libmachine: Creating SSH key...
	I0520 03:30:04.367628    7259 main.go:141] libmachine: Creating Disk image...
	I0520 03:30:04.367638    7259 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:30:04.367957    7259 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2
	I0520 03:30:04.381508    7259 main.go:141] libmachine: STDOUT: 
	I0520 03:30:04.381537    7259 main.go:141] libmachine: STDERR: 
	I0520 03:30:04.381591    7259 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2 +20000M
	I0520 03:30:04.392914    7259 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:30:04.392936    7259 main.go:141] libmachine: STDERR: 
	I0520 03:30:04.392952    7259 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2
	I0520 03:30:04.392955    7259 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:30:04.392994    7259 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:51:36:93:1e:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/test-preload-321000/disk.qcow2
	I0520 03:30:04.394750    7259 main.go:141] libmachine: STDOUT: 
	I0520 03:30:04.394775    7259 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:30:04.394790    7259 client.go:171] duration metric: took 448.440333ms to LocalClient.Create
	I0520 03:30:05.528446    7259 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0520 03:30:05.528528    7259 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.979094584s
	I0520 03:30:05.528569    7259 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0520 03:30:05.528623    7259 cache.go:87] Successfully saved all images to host disk.
	I0520 03:30:06.396958    7259 start.go:128] duration metric: took 2.508527833s to createHost
	I0520 03:30:06.397027    7259 start.go:83] releasing machines lock for "test-preload-321000", held for 2.508946875s
	W0520 03:30:06.397347    7259 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-321000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-321000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:30:06.410720    7259 out.go:177] 
	W0520 03:30:06.414946    7259 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:30:06.414984    7259 out.go:239] * 
	* 
	W0520 03:30:06.417787    7259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:30:06.427847    7259 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-321000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-20 03:30:06.44492 -0700 PDT m=+665.961243251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-321000 -n test-preload-321000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-321000 -n test-preload-321000: exit status 7 (65.188333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-321000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-321000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-321000
--- FAIL: TestPreload (10.17s)
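
Analysis: every fast failure in this report, including TestPreload above, dies on the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. socket_vmnet_client could not reach the socket_vmnet daemon, so the qemu2 driver never received a network file descriptor and VM creation aborted before boot. A minimal spot-check on the affected host might look like the sketch below; the paths are the ones in the logs, while the manual daemon invocation is an assumption taken from the socket_vmnet README and depends on how it was installed:

    # Is the daemon alive, and does the socket it should serve exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # Cheap connectivity probe: socket_vmnet_client connects first and then
    # execs the given command with the connection on fd 3 (hence the
    # "-netdev socket,id=net0,fd=3" in the qemu command line above), so
    # running `true` under it exercises exactly the failing connect().
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

    # If nothing is listening, start the daemon by hand (assumed install
    # prefix and gateway address per the upstream README):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet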

TestScheduledStopUnix (9.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-200000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-200000 --memory=2048 --driver=qemu2 : exit status 80 (9.720435792s)

-- stdout --
	* [scheduled-stop-200000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-200000" primary control-plane node in "scheduled-stop-200000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-200000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-200000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-200000" primary control-plane node in "scheduled-stop-200000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-200000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-20 03:30:16.329152 -0700 PDT m=+675.845659376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-200000 -n scheduled-stop-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-200000 -n scheduled-stop-200000: exit status 7 (68.020958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-200000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-200000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-200000
--- FAIL: TestScheduledStopUnix (9.89s)
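
The stdout block above shows minikube's create/retry flow end to end: the first StartHost attempt fails, the half-created profile is deleted, one retry runs after a 5-second backoff, and only then does the command exit with status 80 (GUEST_PROVISION). While the daemon is down, a one-off run along the following lines should reproduce the same two-attempt failure outside the test harness (the profile name is illustrative; --network merely forces the selection the harness got automatically):

    out/minikube-darwin-arm64 start -p repro-socket-vmnet \
        --driver=qemu2 --network=socket_vmnet --alsologtostderr
    echo $?    # expect 80 until socket_vmnet is serving /var/run/socket_vmnet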

TestSkaffold (12.41s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3732315001 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-237000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-237000 --memory=2600 --driver=qemu2 : exit status 80 (9.874626375s)

-- stdout --
	* [skaffold-237000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-237000" primary control-plane node in "skaffold-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-237000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-237000" primary control-plane node in "skaffold-237000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-237000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-237000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-20 03:30:28.741648 -0700 PDT m=+688.258385376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-237000 -n skaffold-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-237000 -n skaffold-237000: exit status 7 (63.256333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-237000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-237000
--- FAIL: TestSkaffold (12.41s)
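
TestRunningBinaryUpgrade below is the one test in this excerpt that gets past VM creation: the legacy v1.26.0 binary provisioned its profile with the builtin user-mode network (note the empty Network: and SocketVMnetPath: fields in the validated config further down), so it never dials /var/run/socket_vmnet, and the new binary then reuses that running VM over SSH on localhost:51048. That is why it runs for 8m22s before failing instead of dying in about ten seconds. One way to see which profiles opted into socket_vmnet, assuming each profile's cluster config is stored as JSON with a top-level "Network" field:

    grep -h '"Network"' \
        /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/*/config.json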

TestRunningBinaryUpgrade (588.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1993648861 start -p running-upgrade-908000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1993648861 start -p running-upgrade-908000 --memory=2200 --vm-driver=qemu2 : (51.297076584s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.757169041s)

-- stdout --
	* [running-upgrade-908000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-908000" primary control-plane node in "running-upgrade-908000" cluster
	* Updating the running qemu2 "running-upgrade-908000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0520 03:32:01.815320    7666 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:32:01.815475    7666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:32:01.815478    7666 out.go:304] Setting ErrFile to fd 2...
	I0520 03:32:01.815480    7666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:32:01.815618    7666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:32:01.816653    7666 out.go:298] Setting JSON to false
	I0520 03:32:01.833159    7666 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5492,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:32:01.833224    7666 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:32:01.838774    7666 out.go:177] * [running-upgrade-908000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:32:01.846754    7666 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:32:01.846855    7666 notify.go:220] Checking for updates...
	I0520 03:32:01.854786    7666 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:32:01.858747    7666 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:32:01.861836    7666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:32:01.864785    7666 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:32:01.867724    7666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:32:01.871098    7666 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:32:01.874784    7666 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 03:32:01.877743    7666 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:32:01.881775    7666 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:32:01.888824    7666 start.go:297] selected driver: qemu2
	I0520 03:32:01.888830    7666 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51080 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:32:01.888888    7666 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:32:01.891119    7666 cni.go:84] Creating CNI manager for ""
	I0520 03:32:01.891140    7666 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:32:01.891165    7666 start.go:340] cluster config:
	{Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51080 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:32:01.891220    7666 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:32:01.898614    7666 out.go:177] * Starting "running-upgrade-908000" primary control-plane node in "running-upgrade-908000" cluster
	I0520 03:32:01.902779    7666 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 03:32:01.902791    7666 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 03:32:01.902798    7666 cache.go:56] Caching tarball of preloaded images
	I0520 03:32:01.902843    7666 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:32:01.902848    7666 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 03:32:01.902888    7666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/config.json ...
	I0520 03:32:01.903284    7666 start.go:360] acquireMachinesLock for running-upgrade-908000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:32:01.903320    7666 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "running-upgrade-908000"
	I0520 03:32:01.903328    7666 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:32:01.903333    7666 fix.go:54] fixHost starting: 
	I0520 03:32:01.904016    7666 fix.go:112] recreateIfNeeded on running-upgrade-908000: state=Running err=<nil>
	W0520 03:32:01.904026    7666 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:32:01.911794    7666 out.go:177] * Updating the running qemu2 "running-upgrade-908000" VM ...
	I0520 03:32:01.915778    7666 machine.go:94] provisionDockerMachine start ...
	I0520 03:32:01.915808    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:01.915906    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:01.915911    7666 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:32:01.981479    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-908000
	
	I0520 03:32:01.981492    7666 buildroot.go:166] provisioning hostname "running-upgrade-908000"
	I0520 03:32:01.981542    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:01.981659    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:01.981664    7666 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-908000 && echo "running-upgrade-908000" | sudo tee /etc/hostname
	I0520 03:32:02.049688    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-908000
	
	I0520 03:32:02.049724    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:02.049822    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:02.049833    7666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-908000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-908000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-908000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:32:02.114875    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:32:02.114885    7666 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18925-5286/.minikube CaCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18925-5286/.minikube}
	I0520 03:32:02.114892    7666 buildroot.go:174] setting up certificates
	I0520 03:32:02.114896    7666 provision.go:84] configureAuth start
	I0520 03:32:02.114934    7666 provision.go:143] copyHostCerts
	I0520 03:32:02.114991    7666 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem, removing ...
	I0520 03:32:02.114998    7666 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem
	I0520 03:32:02.115117    7666 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem (1078 bytes)
	I0520 03:32:02.115316    7666 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem, removing ...
	I0520 03:32:02.115319    7666 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem
	I0520 03:32:02.115363    7666 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem (1123 bytes)
	I0520 03:32:02.115486    7666 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem, removing ...
	I0520 03:32:02.115489    7666 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem
	I0520 03:32:02.115528    7666 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem (1675 bytes)
	I0520 03:32:02.115620    7666 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-908000 san=[127.0.0.1 localhost minikube running-upgrade-908000]
	I0520 03:32:02.250424    7666 provision.go:177] copyRemoteCerts
	I0520 03:32:02.250469    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:32:02.250478    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:32:02.284532    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 03:32:02.291645    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 03:32:02.298221    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:32:02.305075    7666 provision.go:87] duration metric: took 190.147666ms to configureAuth
	I0520 03:32:02.305083    7666 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:32:02.305191    7666 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:32:02.305248    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:02.305336    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:02.305341    7666 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:32:02.369854    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:32:02.369861    7666 buildroot.go:70] root file system type: tmpfs
	I0520 03:32:02.369912    7666 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:32:02.369963    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:02.370060    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:02.370092    7666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:32:02.438920    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:32:02.438977    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:02.439092    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:02.439100    7666 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:32:02.503544    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:32:02.503558    7666 machine.go:97] duration metric: took 587.785083ms to provisionDockerMachine
	I0520 03:32:02.503565    7666 start.go:293] postStartSetup for "running-upgrade-908000" (driver="qemu2")
	I0520 03:32:02.503573    7666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:32:02.503634    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:32:02.503646    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:32:02.538295    7666 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:32:02.539606    7666 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 03:32:02.539612    7666 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18925-5286/.minikube/addons for local assets ...
	I0520 03:32:02.539671    7666 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18925-5286/.minikube/files for local assets ...
	I0520 03:32:02.539784    7666 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem -> 58182.pem in /etc/ssl/certs
	I0520 03:32:02.539877    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 03:32:02.542347    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem --> /etc/ssl/certs/58182.pem (1708 bytes)
	I0520 03:32:02.549451    7666 start.go:296] duration metric: took 45.881625ms for postStartSetup
	I0520 03:32:02.549466    7666 fix.go:56] duration metric: took 646.145167ms for fixHost
	I0520 03:32:02.549494    7666 main.go:141] libmachine: Using SSH client type: native
	I0520 03:32:02.549595    7666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101372900] 0x101375160 <nil>  [] 0s} localhost 51048 <nil> <nil>}
	I0520 03:32:02.549603    7666 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 03:32:02.614289    7666 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201122.214285055
	
	I0520 03:32:02.614299    7666 fix.go:216] guest clock: 1716201122.214285055
	I0520 03:32:02.614303    7666 fix.go:229] Guest: 2024-05-20 03:32:02.214285055 -0700 PDT Remote: 2024-05-20 03:32:02.549468 -0700 PDT m=+0.753326376 (delta=-335.182945ms)
	I0520 03:32:02.614314    7666 fix.go:200] guest clock delta is within tolerance: -335.182945ms
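	The tolerance check compares the guest's wall clock (the date +%s.%N output above) against the host's time when the SSH command returned; here the guest runs about a third of a second behind, well within bounds. A coarse version of the same comparison from the host (a sketch; the port is the one logged above, $SSH_KEY stands for the id_rsa path logged above, and whole-second granularity is a simplification of minikube's check):
	
	  guest=$(ssh -p 51048 -i "$SSH_KEY" docker@localhost 'date +%s')
	  host=$(date +%s)
	  echo "guest-host clock delta: $((guest - host))s"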
	I0520 03:32:02.614317    7666 start.go:83] releasing machines lock for "running-upgrade-908000", held for 711.005958ms
	I0520 03:32:02.614380    7666 ssh_runner.go:195] Run: cat /version.json
	I0520 03:32:02.614391    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:32:02.614380    7666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:32:02.614423    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	W0520 03:32:02.614942    7666 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51048: connect: connection refused
	I0520 03:32:02.614964    7666 retry.go:31] will retry after 267.304893ms: dial tcp [::1]:51048: connect: connection refused
	W0520 03:32:02.647815    7666 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 03:32:02.647861    7666 ssh_runner.go:195] Run: systemctl --version
	I0520 03:32:02.649533    7666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 03:32:02.651169    7666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:32:02.651191    7666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 03:32:02.654077    7666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 03:32:02.658800    7666 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
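	The two find/sed passes above normalize any pre-existing bridge and podman CNI configs: IPv6 dst/subnet entries are dropped and the pod network is pinned to 10.244.0.0/16. A sketch of how to inspect the result (file name is the one the log reports as configured):
	
	  sudo grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
	  # expected after the rewrite:
	  #   "subnet": "10.244.0.0/16"
	  #   "gateway": "10.244.0.1"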
	I0520 03:32:02.658807    7666 start.go:494] detecting cgroup driver to use...
	I0520 03:32:02.658913    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:32:02.663876    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 03:32:02.666724    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:32:02.669683    7666 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:32:02.669710    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:32:02.672589    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:32:02.675530    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:32:02.678352    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:32:02.681193    7666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:32:02.684436    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:32:02.687245    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:32:02.690071    7666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 03:32:02.693605    7666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:32:02.696461    7666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:32:02.698966    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:32:02.797617    7666 ssh_runner.go:195] Run: sudo systemctl restart containerd
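	Taken together, the sed edits above converge /etc/containerd/config.toml on the settings kubeadm expects for this docker/cgroupfs combination. A sketch of what to expect afterwards (assuming the stock Buildroot config as the starting point):
	
	  sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  # expected:
	  #   sandbox_image = "registry.k8s.io/pause:3.7"
	  #   SystemdCgroup = false
	  #   conf_dir = "/etc/cni/net.d"
	  #   enable_unprivileged_ports = true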
	I0520 03:32:02.808806    7666 start.go:494] detecting cgroup driver to use...
	I0520 03:32:02.808877    7666 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:32:02.813955    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:32:02.818834    7666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:32:02.824663    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:32:02.829194    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:32:02.833439    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:32:02.838832    7666 ssh_runner.go:195] Run: which cri-dockerd
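	The crictl.yaml rewrite above retargets crictl from containerd to the cri-dockerd shim. A verification sketch, once cri-docker.service is up (it is restarted a few lines below):
	
	  cat /etc/crictl.yaml      # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	  sudo crictl version       # should report RuntimeName: docker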
	I0520 03:32:02.840127    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:32:02.842660    7666 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:32:02.847625    7666 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:32:02.940536    7666 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:32:03.030547    7666 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:32:03.030596    7666 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:32:03.036908    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:32:03.133240    7666 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:32:05.884996    7666 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.751790583s)
	I0520 03:32:05.885062    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:32:05.889918    7666 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 03:32:05.896462    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:32:05.901385    7666 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:32:05.969017    7666 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:32:06.049850    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:32:06.117315    7666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:32:06.123699    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:32:06.128298    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:32:06.210453    7666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:32:06.249865    7666 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:32:06.249945    7666 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:32:06.252171    7666 start.go:562] Will wait 60s for crictl version
	I0520 03:32:06.252228    7666 ssh_runner.go:195] Run: which crictl
	I0520 03:32:06.253596    7666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:32:06.266083    7666 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 03:32:06.266155    7666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:32:06.279147    7666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:32:06.298565    7666 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 03:32:06.298633    7666 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 03:32:06.300016    7666 kubeadm.go:877] updating cluster {Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51080 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 03:32:06.300063    7666 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 03:32:06.300102    7666 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:32:06.310192    7666 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:32:06.310201    7666 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 03:32:06.310242    7666 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:32:06.313250    7666 ssh_runner.go:195] Run: which lz4
	I0520 03:32:06.314494    7666 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 03:32:06.315775    7666 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 03:32:06.315787    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 03:32:07.013780    7666 docker.go:649] duration metric: took 699.325083ms to copy over tarball
	I0520 03:32:07.013836    7666 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 03:32:08.285023    7666 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.271196208s)
	I0520 03:32:08.285039    7666 ssh_runner.go:146] rm: /preloaded.tar.lz4
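	The preload shortcut restores the full /var/lib/docker image store from a single lz4 tarball instead of pulling each image. The extraction step by hand (a sketch; same flags as the logged command, run inside the guest):
	
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm /preloaded.tar.lz4   # the tarball is only needed once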
	I0520 03:32:08.300679    7666 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:32:08.303649    7666 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 03:32:08.308291    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:32:08.384721    7666 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:32:09.619597    7666 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.234884167s)
	I0520 03:32:09.619699    7666 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:32:09.633626    7666 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:32:09.633635    7666 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 03:32:09.633640    7666 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 03:32:09.651137    7666 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 03:32:09.652397    7666 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:32:09.652536    7666 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:32:09.652578    7666 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:32:09.652625    7666 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:32:09.652831    7666 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:32:09.652886    7666 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:32:09.654392    7666 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:32:09.657764    7666 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 03:32:09.660980    7666 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:32:09.660932    7666 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:32:09.661342    7666 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:32:09.661349    7666 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:32:09.661366    7666 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:32:09.661388    7666 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:32:09.661438    7666 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:32:10.006208    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 03:32:10.018060    7666 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 03:32:10.018078    7666 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 03:32:10.018126    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 03:32:10.028228    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 03:32:10.028348    7666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 03:32:10.030075    7666 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 03:32:10.030086    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 03:32:10.034547    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 03:32:10.038032    7666 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 03:32:10.038042    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 03:32:10.047327    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:32:10.048753    7666 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 03:32:10.048768    7666 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:32:10.048799    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 03:32:10.076917    7666 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 03:32:10.076952    7666 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 03:32:10.076967    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 03:32:10.076968    7666 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:32:10.077015    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:32:10.077065    7666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 03:32:10.078575    7666 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 03:32:10.078589    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0520 03:32:10.082121    7666 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 03:32:10.082248    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:32:10.086574    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:32:10.091822    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 03:32:10.099806    7666 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 03:32:10.099826    7666 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:32:10.099891    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:32:10.103307    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:32:10.114836    7666 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 03:32:10.114866    7666 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:32:10.114932    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:32:10.134681    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 03:32:10.134691    7666 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 03:32:10.134711    7666 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:32:10.134755    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:32:10.134799    7666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 03:32:10.151284    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 03:32:10.159419    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:32:10.164929    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 03:32:10.164945    7666 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 03:32:10.164963    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 03:32:10.190882    7666 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 03:32:10.190903    7666 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:32:10.190954    7666 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:32:10.218503    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 03:32:10.253185    7666 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 03:32:10.253198    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0520 03:32:10.317608    7666 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 03:32:10.317716    7666 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:32:10.383483    7666 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 03:32:10.383502    7666 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 03:32:10.383517    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0520 03:32:10.383529    7666 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 03:32:10.383545    7666 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:32:10.383595    7666 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:32:10.533429    7666 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0520 03:32:11.382716    7666 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 03:32:11.383219    7666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 03:32:11.388436    7666 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 03:32:11.388525    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 03:32:11.441984    7666 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 03:32:11.441998    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 03:32:11.684710    7666 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
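	Each cached image above goes through the same four-step cycle: inspect the hash in the runtime, rmi the mismatched copy, scp the cached tarball into /var/lib/minikube/images, and pipe it into docker load. The cycle for one image, by hand (a sketch; pause:3.7 as the example):
	
	  docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7   # hash differs from the cache -> transfer needed
	  docker rmi registry.k8s.io/pause:3.7
	  # ...copy the cached tarball to /var/lib/minikube/images/pause_3.7, then:
	  sudo cat /var/lib/minikube/images/pause_3.7 | docker load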
	I0520 03:32:11.684759    7666 cache_images.go:92] duration metric: took 2.051151042s to LoadCachedImages
	W0520 03:32:11.684801    7666 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0520 03:32:11.684807    7666 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 03:32:11.684871    7666 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-908000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 03:32:11.684937    7666 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 03:32:11.698406    7666 cni.go:84] Creating CNI manager for ""
	I0520 03:32:11.698418    7666 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:32:11.698425    7666 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 03:32:11.698433    7666 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-908000 NodeName:running-upgrade-908000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 03:32:11.698496    7666 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-908000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
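	
	A config like the one above can be exercised without touching the node via kubeadm's dry-run mode (a sketch; the kubeadm binary must match KubernetesVersion):
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run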
	
	I0520 03:32:11.698548    7666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 03:32:11.701538    7666 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 03:32:11.701573    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 03:32:11.704887    7666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 03:32:11.710108    7666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 03:32:11.714864    7666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0520 03:32:11.720238    7666 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 03:32:11.721669    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:32:11.802259    7666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:32:11.807415    7666 certs.go:68] Setting up /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000 for IP: 10.0.2.15
	I0520 03:32:11.807422    7666 certs.go:194] generating shared ca certs ...
	I0520 03:32:11.807431    7666 certs.go:226] acquiring lock for ca certs: {Name:mk32e3e05b22049132d2a360697fa20a693ff13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:32:11.807646    7666 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.key
	I0520 03:32:11.807687    7666 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.key
	I0520 03:32:11.807692    7666 certs.go:256] generating profile certs ...
	I0520 03:32:11.807750    7666 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.key
	I0520 03:32:11.807761    7666 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67
	I0520 03:32:11.807771    7666 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 03:32:11.847715    7666 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67 ...
	I0520 03:32:11.847720    7666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67: {Name:mk047c75f2a92c66ec65cc7a55eea761e13c7c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:32:11.848091    7666 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67 ...
	I0520 03:32:11.848097    7666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67: {Name:mk239c5d0a5612304aa7317d21b5906b129de070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:32:11.848241    7666 certs.go:381] copying /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.crt.bcc26e67 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.crt
	I0520 03:32:11.848375    7666 certs.go:385] copying /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.key.bcc26e67 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.key
	I0520 03:32:11.848509    7666 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/proxy-client.key
	I0520 03:32:11.848620    7666 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818.pem (1338 bytes)
	W0520 03:32:11.848641    7666 certs.go:480] ignoring /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818_empty.pem, impossibly tiny 0 bytes
	I0520 03:32:11.848646    7666 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 03:32:11.848671    7666 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem (1078 bytes)
	I0520 03:32:11.848689    7666 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem (1123 bytes)
	I0520 03:32:11.848709    7666 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem (1675 bytes)
	I0520 03:32:11.848747    7666 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem (1708 bytes)
	I0520 03:32:11.849075    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 03:32:11.856028    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 03:32:11.863176    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 03:32:11.869894    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 03:32:11.876922    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 03:32:11.883734    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 03:32:11.890297    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 03:32:11.897834    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 03:32:11.905356    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818.pem --> /usr/share/ca-certificates/5818.pem (1338 bytes)
	I0520 03:32:11.912495    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem --> /usr/share/ca-certificates/58182.pem (1708 bytes)
	I0520 03:32:11.919431    7666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 03:32:11.926317    7666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 03:32:11.931363    7666 ssh_runner.go:195] Run: openssl version
	I0520 03:32:11.933051    7666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5818.pem && ln -fs /usr/share/ca-certificates/5818.pem /etc/ssl/certs/5818.pem"
	I0520 03:32:11.936219    7666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5818.pem
	I0520 03:32:11.937580    7666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:19 /usr/share/ca-certificates/5818.pem
	I0520 03:32:11.937602    7666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5818.pem
	I0520 03:32:11.939797    7666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5818.pem /etc/ssl/certs/51391683.0"
	I0520 03:32:11.942386    7666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58182.pem && ln -fs /usr/share/ca-certificates/58182.pem /etc/ssl/certs/58182.pem"
	I0520 03:32:11.945729    7666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58182.pem
	I0520 03:32:11.947356    7666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:19 /usr/share/ca-certificates/58182.pem
	I0520 03:32:11.947375    7666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58182.pem
	I0520 03:32:11.949235    7666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/58182.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 03:32:11.952192    7666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 03:32:11.954966    7666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:32:11.956462    7666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:32:11.956486    7666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:32:11.958362    7666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
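	The <hash>.0 link names above follow OpenSSL's subject-hash convention: libraries locate a CA by hashing its subject, so the symlink must be named after that hash. Deriving the name instead of hard-coding it (a sketch, equivalent to the logged commands):
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here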
	I0520 03:32:11.961477    7666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 03:32:11.963165    7666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 03:32:11.964923    7666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 03:32:11.966791    7666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 03:32:11.968597    7666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 03:32:11.970502    7666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 03:32:11.972214    7666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
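	Each -checkend 86400 probe above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is how the tooling decides whether a cert needs regeneration. One probe with an explicit verdict (a sketch):
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "apiserver.crt valid for >24h" \
	    || echo "apiserver.crt expires within 24h"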
	I0520 03:32:11.973822    7666 kubeadm.go:391] StartCluster: {Name:running-upgrade-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51080 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:32:11.973884    7666 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:32:11.984374    7666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 03:32:11.987526    7666 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 03:32:11.987532    7666 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 03:32:11.987536    7666 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 03:32:11.987558    7666 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 03:32:11.990156    7666 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:32:11.990195    7666 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-908000" does not appear in /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:32:11.990209    7666 kubeconfig.go:62] /Users/jenkins/minikube-integration/18925-5286/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-908000" cluster setting kubeconfig missing "running-upgrade-908000" context setting]
	I0520 03:32:11.990390    7666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:32:11.991044    7666 kapi.go:59] client config for running-upgrade-908000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026fc580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:32:11.991835    7666 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 03:32:11.994825    7666 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-908000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0520 03:32:11.994832    7666 kubeadm.go:1154] stopping kube-system containers ...
	I0520 03:32:11.994874    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:32:12.006869    7666 docker.go:483] Stopping containers: [de13e5e180bc 92224ce8da5d 0cd9e2d7ade9 ef5128771d91 cab878002565 dc341bfdb38e 12e4c1a32e84 cb0a40237ac7 50dc532e6232 e272c0fad9b3 ab94818e2890 3f52236b81f9 d0d85f4180e3 4cebb318b8da]
	I0520 03:32:12.006942    7666 ssh_runner.go:195] Run: docker stop de13e5e180bc 92224ce8da5d 0cd9e2d7ade9 ef5128771d91 cab878002565 dc341bfdb38e 12e4c1a32e84 cb0a40237ac7 50dc532e6232 e272c0fad9b3 ab94818e2890 3f52236b81f9 d0d85f4180e3 4cebb318b8da
	I0520 03:32:12.018222    7666 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 03:32:12.121325    7666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:32:12.125829    7666 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 May 20 10:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 May 20 10:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May 20 10:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May 20 10:31 /etc/kubernetes/scheduler.conf
	
	I0520 03:32:12.125864    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/admin.conf
	I0520 03:32:12.129453    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:32:12.129480    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:32:12.133605    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/kubelet.conf
	I0520 03:32:12.137047    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:32:12.137074    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:32:12.140324    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/controller-manager.conf
	I0520 03:32:12.143203    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:32:12.143226    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:32:12.146042    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/scheduler.conf
	I0520 03:32:12.148839    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:32:12.148859    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 03:32:12.151424    7666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:32:12.154116    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:32:12.194213    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:32:12.597832    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:32:12.823406    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:32:12.853975    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
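	The five init phases above replay just the control-plane portion of kubeadm init (certs, kubeconfigs, kubelet bootstrap, static pods, etcd) against the new config, skipping preflight and addons. The same sequence as a loop (a sketch; the word splitting of $phase is intentional):
	
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	  done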
	I0520 03:32:12.874782    7666 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:32:12.874857    7666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:32:13.377128    7666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:32:13.876914    7666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:32:13.881684    7666 api_server.go:72] duration metric: took 1.0069225s to wait for apiserver process to appear ...
	I0520 03:32:13.881694    7666 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:32:13.881703    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:18.883733    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:18.883804    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:23.884322    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:23.884404    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:28.885319    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:28.885364    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:33.886221    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:33.886290    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:38.887668    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:38.887748    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:43.889581    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:43.889671    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:48.891702    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:48.891792    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:53.894570    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:53.894661    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:32:58.897207    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:32:58.897286    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:03.899947    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:03.900034    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:08.902178    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:08.902267    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:13.902959    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
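[editor's note] Each healthz probe above fails with "Client.Timeout exceeded while awaiting headers", meaning the per-request client timeout (about 5s, judging by the probe spacing) expires before the apiserver answers; the probe is then retried until an overall deadline. A minimal sketch of this poll pattern, with hypothetical names and timeouts rather than minikube's actual api_server.go code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes url until it returns 200 or the overall deadline passes.
// Each attempt is bounded by a per-request client timeout, which produces the
// "Client.Timeout exceeded while awaiting headers" errors seen in the log.
func pollHealthz(url string, perRequest, overall time.Duration) error {
	client := &http.Client{
		Timeout: perRequest, // each probe gives up after this long
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert during bootstrap.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Back off briefly before the next probe.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```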
	I0520 03:33:13.903163    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:33:13.918883    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:33:13.918966    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:33:13.931033    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:33:13.931098    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:33:13.941955    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:33:13.942017    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:33:13.952440    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:33:13.952511    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:33:13.963687    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:33:13.963752    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:33:13.982002    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:33:13.982069    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:33:13.991900    7666 logs.go:276] 0 containers: []
	W0520 03:33:13.991912    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:33:13.991964    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:33:14.002320    7666 logs.go:276] 1 containers: [a0a63a14b225]
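[editor's note] With healthz still failing, minikube falls back to diagnostics: it first resolves container IDs per control-plane component with name-filtered `docker ps` queries (two IDs per component typically means a restarted container plus its exited predecessor, as with kube-apiserver here). A stand-alone sketch of that discovery step, assuming the Docker CLI is on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs (running or exited) whose name
// matches the k8s_<component> prefix Docker gives kubelet-managed containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```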
	I0520 03:33:14.002337    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:33:14.002342    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:33:14.038375    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:33:14.038384    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:33:14.108645    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:33:14.108658    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:33:14.124657    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:33:14.124668    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:33:14.136076    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:33:14.136091    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:33:14.154306    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:33:14.154317    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:33:14.166829    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:33:14.166842    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:33:14.195574    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:33:14.195585    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:33:14.209934    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:33:14.209947    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:33:14.214183    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:33:14.214190    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:33:14.227881    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:33:14.227891    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:33:14.241625    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:33:14.241637    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:33:14.253200    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:33:14.253213    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:33:14.277871    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:33:14.277879    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:33:14.288918    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:33:14.288929    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:33:14.309424    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:33:14.309436    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
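[editor's note] Each discovered container then gets a `docker logs --tail 400`, alongside the host-level sources: the kubelet and Docker units via journalctl, dmesg, `kubectl describe nodes` against the node kubeconfig, and a crictl-or-docker `ps -a` for container status. This discover-then-gather cycle repeats after every failed healthz probe for the remainder of this log, only the timestamps and gathering order changing. A compact sketch of the host-level gather list (commands copied from the log; a bare bash runner assumed in place of ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Host-level diagnostic commands, as issued in the log above.
	sources := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"Docker":           `sudo journalctl -u docker -u cri-docker -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"describe nodes":   `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
	}
}
```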
	I0520 03:33:16.821812    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:21.824862    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:21.825271    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:33:21.871677    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:33:21.871803    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:33:21.891978    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:33:21.892085    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:33:21.906819    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:33:21.906887    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:33:21.920588    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:33:21.920657    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:33:21.931075    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:33:21.931157    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:33:21.941467    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:33:21.941534    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:33:21.951439    7666 logs.go:276] 0 containers: []
	W0520 03:33:21.951451    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:33:21.951519    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:33:21.962170    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:33:21.962189    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:33:21.962194    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:33:21.975985    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:33:21.975996    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:33:21.987800    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:33:21.987814    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:33:21.999286    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:33:21.999297    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:33:22.004033    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:33:22.004041    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:33:22.041343    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:33:22.041354    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:33:22.059891    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:33:22.059902    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:33:22.095785    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:33:22.095797    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:33:22.119884    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:33:22.119893    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:33:22.134959    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:33:22.134969    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:33:22.149896    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:33:22.149906    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:33:22.161404    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:33:22.161413    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:33:22.172357    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:33:22.172372    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:33:22.198019    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:33:22.198028    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:33:22.209354    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:33:22.209363    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:33:22.225787    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:33:22.225797    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:33:24.738925    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:29.741371    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:29.741875    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:33:29.790719    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:33:29.790880    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:33:29.811099    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:33:29.811194    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:33:29.825067    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:33:29.825133    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:33:29.838409    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:33:29.838475    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:33:29.849164    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:33:29.849232    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:33:29.859882    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:33:29.859947    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:33:29.870011    7666 logs.go:276] 0 containers: []
	W0520 03:33:29.870021    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:33:29.870075    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:33:29.880577    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:33:29.880592    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:33:29.880598    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:33:29.917097    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:33:29.917111    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:33:29.930437    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:33:29.930451    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:33:29.947634    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:33:29.947644    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:33:29.959323    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:33:29.959333    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:33:29.983606    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:33:29.983618    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:33:29.997306    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:33:29.997315    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:33:30.008833    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:33:30.008843    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:33:30.020726    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:33:30.020737    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:33:30.035630    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:33:30.035642    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:33:30.072498    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:33:30.072508    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:33:30.076640    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:33:30.076646    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:33:30.087971    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:33:30.087982    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:33:30.112644    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:33:30.112652    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:33:30.126495    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:33:30.126505    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:33:30.141118    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:33:30.141130    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:33:32.654669    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:37.657587    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:37.657959    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:33:37.692446    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:33:37.692567    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:33:37.711178    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:33:37.711279    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:33:37.729296    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:33:37.729362    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:33:37.740843    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:33:37.740952    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:33:37.751119    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:33:37.751195    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:33:37.761589    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:33:37.761657    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:33:37.771460    7666 logs.go:276] 0 containers: []
	W0520 03:33:37.771471    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:33:37.771530    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:33:37.781952    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:33:37.781976    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:33:37.781980    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:33:37.793592    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:33:37.793601    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:33:37.811703    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:33:37.811715    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:33:37.826032    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:33:37.826040    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:33:37.837466    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:33:37.837477    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:33:37.861662    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:33:37.861672    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:33:37.899437    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:33:37.899478    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:33:37.934632    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:33:37.934643    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:33:37.948713    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:33:37.948723    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:33:37.962476    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:33:37.962486    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:33:37.977215    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:33:37.977224    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:33:37.994715    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:33:37.994724    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:33:38.010931    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:33:38.010942    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:33:38.022439    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:33:38.022450    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:33:38.027080    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:33:38.027086    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:33:38.050826    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:33:38.050835    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:33:40.570340    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:45.572968    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:45.573438    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:33:45.612507    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:33:45.612635    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:33:45.634628    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:33:45.634753    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:33:45.650229    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:33:45.650304    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:33:45.663135    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:33:45.663211    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:33:45.677962    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:33:45.678028    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:33:45.688918    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:33:45.688990    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:33:45.699212    7666 logs.go:276] 0 containers: []
	W0520 03:33:45.699225    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:33:45.699282    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:33:45.710260    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:33:45.710278    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:33:45.710283    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:33:45.725053    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:33:45.725065    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:33:45.739351    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:33:45.739364    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:33:45.754060    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:33:45.754072    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:33:45.769549    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:33:45.769561    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:33:45.795191    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:33:45.795200    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:33:45.819403    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:33:45.819414    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:33:45.831765    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:33:45.831774    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:33:45.844267    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:33:45.844276    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:33:45.857736    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:33:45.857746    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:33:45.891885    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:33:45.891891    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:33:45.909777    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:33:45.909786    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:33:45.921663    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:33:45.921675    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:33:45.925777    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:33:45.925787    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:33:45.959284    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:33:45.959297    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:33:45.972828    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:33:45.972838    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:33:48.486265    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:33:53.489086    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:33:53.489506    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:33:53.527560    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:33:53.527705    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:33:53.548484    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:33:53.548574    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:33:53.563580    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:33:53.563653    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:33:53.575949    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:33:53.576026    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:33:53.586767    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:33:53.586829    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:33:53.597607    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:33:53.597674    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:33:53.608404    7666 logs.go:276] 0 containers: []
	W0520 03:33:53.608418    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:33:53.608492    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:33:53.619469    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:33:53.619486    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:33:53.619490    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:33:53.631836    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:33:53.631844    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:33:53.643582    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:33:53.643596    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:33:53.647977    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:33:53.647986    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:33:53.682598    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:33:53.682609    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:33:53.699569    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:33:53.699580    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:33:53.713455    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:33:53.713463    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:33:53.725211    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:33:53.725222    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:33:53.760761    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:33:53.760771    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:33:53.777666    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:33:53.777674    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:33:53.789119    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:33:53.789128    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:33:53.803290    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:33:53.803299    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:33:53.829094    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:33:53.829104    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:33:53.844656    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:33:53.844664    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:33:53.868329    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:33:53.868339    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:33:53.885859    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:33:53.885867    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:33:56.398873    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:01.401556    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:01.401826    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:01.421348    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:01.421446    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:01.445200    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:01.445264    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:01.457304    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:01.457371    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:01.467523    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:01.467587    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:01.481059    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:01.481122    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:01.491965    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:01.492042    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:01.502084    7666 logs.go:276] 0 containers: []
	W0520 03:34:01.502096    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:01.502146    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:01.512561    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:01.512577    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:01.512582    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:01.548190    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:01.548198    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:01.576334    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:01.576344    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:01.590687    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:01.590701    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:01.602467    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:01.602478    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:01.613762    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:01.613775    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:01.639222    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:01.639228    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:01.677608    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:01.677616    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:01.693079    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:01.693092    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:01.704729    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:01.704742    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:01.722105    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:01.722116    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:01.726973    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:01.726978    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:01.745335    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:01.745345    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:01.760132    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:01.760143    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:01.773750    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:01.773760    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:01.791438    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:01.791450    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:04.307601    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:09.310153    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:09.310250    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:09.328778    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:09.328852    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:09.346547    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:09.346627    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:09.360055    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:09.360131    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:09.372300    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:09.372374    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:09.388809    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:09.388865    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:09.399734    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:09.399800    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:09.410568    7666 logs.go:276] 0 containers: []
	W0520 03:34:09.410580    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:09.410638    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:09.421032    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:09.421050    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:09.421055    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:09.439774    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:09.439785    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:09.454256    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:09.454266    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:09.466419    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:09.466429    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:09.478706    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:09.478717    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:09.489897    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:09.489907    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:09.505620    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:09.505631    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:09.541057    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:09.541066    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:09.576614    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:09.576625    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:09.601569    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:09.601583    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:09.617010    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:09.617023    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:09.642146    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:09.642153    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:09.656413    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:09.656426    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:09.667999    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:09.668013    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:09.685595    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:09.685605    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:09.698743    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:09.698756    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:12.205414    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:17.207726    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:17.207847    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:17.231962    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:17.232037    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:17.242647    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:17.242708    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:17.252734    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:17.252800    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:17.262967    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:17.263024    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:17.273283    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:17.273347    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:17.283899    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:17.283963    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:17.301071    7666 logs.go:276] 0 containers: []
	W0520 03:34:17.301082    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:17.301137    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:17.311145    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:17.311162    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:17.311168    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:17.322800    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:17.322814    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:17.337963    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:17.337976    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:17.349647    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:17.349657    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:17.369794    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:17.369806    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:17.394180    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:17.394189    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:17.398927    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:17.398935    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:17.410657    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:17.410668    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:17.421769    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:17.421778    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:17.433060    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:17.433072    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:17.444407    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:17.444419    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:17.479965    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:17.479975    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:17.515064    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:17.515079    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:17.529002    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:17.529016    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:17.542842    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:17.542853    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:17.567319    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:17.567331    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:20.082453    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:25.085112    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:25.085456    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:25.119048    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:25.119177    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:25.144300    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:25.144390    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:25.161009    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:25.161082    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:25.172306    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:25.172378    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:25.182988    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:25.183054    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:25.193647    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:25.193716    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:25.204044    7666 logs.go:276] 0 containers: []
	W0520 03:34:25.204059    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:25.204124    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:25.214935    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:25.214953    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:25.214959    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:25.220419    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:25.220426    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:25.233932    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:25.233945    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:25.245384    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:25.245397    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:25.269851    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:25.269858    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:25.282600    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:25.282609    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:25.301129    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:25.301138    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:25.334483    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:25.334499    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:25.349120    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:25.349135    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:25.375040    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:25.375054    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:25.393371    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:25.393386    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:25.407424    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:25.407433    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:25.421640    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:25.421653    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:25.457462    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:25.457468    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:25.468620    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:25.468632    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:25.481132    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:25.481147    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:27.994470    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:32.996850    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:32.997241    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:33.033790    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:33.033932    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:33.054513    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:33.054623    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:33.070177    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:33.070255    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:33.086033    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:33.086103    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:33.096575    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:33.096640    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:33.107111    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:33.107181    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:33.117628    7666 logs.go:276] 0 containers: []
	W0520 03:34:33.117640    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:33.117698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:33.131827    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:33.131844    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:33.131850    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:33.150034    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:33.150047    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:33.163815    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:33.163827    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:33.175823    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:33.175835    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:33.187430    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:33.187442    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:33.191982    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:33.191991    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:33.215947    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:33.215958    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:33.240303    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:33.240309    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:33.251751    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:33.251763    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:33.266175    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:33.266187    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:33.284102    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:33.284112    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:33.296900    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:33.296910    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:33.317847    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:33.317858    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:33.329347    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:33.329358    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:33.365741    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:33.365752    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:33.379941    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:33.379953    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
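The cycle that repeats through the rest of this log is a health probe with a 5-second client timeout followed by a diagnostic sweep: the probe issued at 03:34:27.99 above times out at 03:34:32.99, and the sweep runs before the next probe. A rough bash equivalent of that loop (the curl flags are an assumption; minikube performs the probe in Go, not with curl, and gather_component_logs is a hypothetical stand-in for the sweep):

	# -k: the apiserver serves a cluster-internal certificate;
	# --max-time 5 mirrors the 5 s Client.Timeout in the "stopped" messages
	until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	    gather_component_logs   # hypothetical helper standing in for the sweep above
	done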
	I0520 03:34:35.916257    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:40.918555    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:40.918735    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:40.930696    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:40.930777    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:40.941499    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:40.941568    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:40.954247    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:40.954339    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:40.966398    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:40.966476    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:40.984704    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:40.984777    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:40.996717    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:40.996797    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:41.008028    7666 logs.go:276] 0 containers: []
	W0520 03:34:41.008041    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:41.008110    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:41.020356    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:41.020377    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:41.020383    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:41.045270    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:41.045292    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:41.057778    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:41.057793    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:41.071404    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:41.071415    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:41.076223    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:41.076235    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:41.095946    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:41.095964    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:41.110226    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:41.110236    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:41.125616    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:41.125629    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:41.138797    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:41.138809    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:41.151697    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:41.151708    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:41.167114    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:41.167125    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:41.194418    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:41.194436    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:41.241222    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:41.241242    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:41.283258    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:41.283270    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:41.299384    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:41.299399    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:41.315137    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:41.315148    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
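At the top of every cycle the runner re-enumerates the control-plane containers by the kubeadm `k8s_<component>` name prefix; since `docker ps -a` also lists exited containers, two IDs for a component (as with kube-apiserver and etcd here) most likely means a restarted instance alongside its predecessor. The same enumeration written as a plain loop, a sketch only:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	    # -a includes exited containers; the format prints bare IDs
	    printf '%s: %s\n' "$c" \
	        "$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}' | tr '\n' ' ')"
	done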
	I0520 03:34:43.832485    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:48.835190    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:48.835570    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:48.870816    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:48.870954    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:48.891104    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:48.891220    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:48.906236    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:48.906308    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:48.921928    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:48.921994    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:48.932360    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:48.932434    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:48.948095    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:48.948180    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:48.958420    7666 logs.go:276] 0 containers: []
	W0520 03:34:48.958431    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:48.958485    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:48.969457    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:48.969477    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:48.969482    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:48.983168    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:48.983180    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:48.997566    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:48.997579    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:49.031845    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:49.031853    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:49.043299    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:49.043322    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:49.054904    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:49.054915    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:49.094166    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:49.094176    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:49.118312    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:49.118323    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:49.132500    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:49.132512    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:34:49.156816    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:49.156823    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:49.169560    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:49.169570    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:49.174281    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:49.174287    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:49.186792    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:49.186803    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:49.201096    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:49.201105    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:49.212806    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:49.212817    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:49.229809    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:49.229819    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
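The "container status" command in these sweeps is worth unpacking: the backquoted `which crictl || echo crictl` substitutes the crictl path when the binary exists and the bare word crictl otherwise, so on a host without crictl the first command fails and the outer `||` falls through to `sudo docker ps -a`. The same line with modern substitution syntax and the fallback made explicit:

	# try crictl (by path if found, bare name if not);
	# if that command errors for any reason, list via docker instead
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a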
	I0520 03:34:51.746072    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:34:56.748199    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:34:56.748314    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:34:56.759935    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:34:56.760003    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:34:56.771251    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:34:56.771328    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:34:56.781934    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:34:56.782005    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:34:56.793513    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:34:56.793598    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:34:56.804137    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:34:56.804202    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:34:56.815381    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:34:56.819304    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:34:56.829541    7666 logs.go:276] 0 containers: []
	W0520 03:34:56.829553    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:34:56.829612    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:34:56.841575    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:34:56.841593    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:34:56.841599    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:34:56.855892    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:34:56.855904    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:34:56.871411    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:34:56.871428    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:34:56.909870    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:34:56.909887    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:34:56.925541    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:34:56.925555    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:34:56.952325    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:34:56.952347    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:34:56.964992    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:34:56.965004    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:34:56.983632    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:34:56.983642    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:34:56.998718    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:34:56.998730    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:34:57.010770    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:34:57.010781    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:34:57.023828    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:34:57.023839    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:34:57.028475    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:34:57.028482    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:34:57.063544    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:34:57.063557    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:34:57.075939    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:34:57.075954    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:34:57.087667    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:34:57.087677    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:34:57.099196    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:34:57.099207    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
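Besides container logs, each sweep also captures host-side logs: the kubelet and docker/cri-docker journald units, plus kernel messages filtered to warning level and above. The dmesg flags keep the output capture-friendly (-P disables the pager that -H's human-readable timestamps would otherwise invoke, and -L=never disables color). The three host-log commands, runnable as-is inside the guest:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400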
	I0520 03:34:59.627147    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:04.629318    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:04.629482    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:04.640596    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:04.640677    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:04.651509    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:04.651580    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:04.662754    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:04.662822    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:04.674880    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:04.674961    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:04.686734    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:04.686802    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:04.697613    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:04.697682    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:04.708036    7666 logs.go:276] 0 containers: []
	W0520 03:35:04.708046    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:04.708101    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:04.719091    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:04.719109    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:04.719115    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:04.723998    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:04.724011    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:04.738681    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:04.738695    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:04.756015    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:04.756028    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:04.774615    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:04.774627    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:04.786288    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:04.786299    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:04.798105    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:04.798115    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:04.812113    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:04.812125    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:04.831760    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:04.831772    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:04.845765    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:04.845776    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:04.857379    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:04.857390    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:04.868987    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:04.868998    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:04.893733    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:04.893747    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:04.931026    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:04.931045    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:04.967174    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:04.967186    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:04.991785    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:04.991797    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
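The "describe nodes" step runs the kubectl binary that minikube caches inside the guest under /var/lib/minikube/binaries, pinned to the cluster's Kubernetes version (v1.24.1 here), against the in-guest kubeconfig rather than the host's. The exact command, verbatim from the runs above:

	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig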
	I0520 03:35:07.507561    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:12.509849    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:12.510316    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:12.552572    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:12.552715    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:12.574913    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:12.575037    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:12.590538    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:12.590628    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:12.603098    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:12.603163    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:12.614235    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:12.614304    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:12.625077    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:12.625148    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:12.635157    7666 logs.go:276] 0 containers: []
	W0520 03:35:12.635168    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:12.635224    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:12.646519    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:12.646536    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:12.646541    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:12.664540    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:12.664552    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:12.687874    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:12.687889    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:12.743897    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:12.743912    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:12.771101    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:12.771113    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:12.796441    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:12.796451    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:12.811205    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:12.811215    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:12.822870    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:12.822880    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:12.834991    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:12.835003    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:12.846618    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:12.846629    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:12.883444    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:12.883454    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:12.897456    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:12.897466    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:12.908653    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:12.908664    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:12.929522    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:12.929536    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:12.944484    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:12.944494    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:12.948613    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:12.948623    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:15.464356    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:20.464784    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:20.464969    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:20.484457    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:20.484546    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:20.502183    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:20.502251    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:20.513597    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:20.513666    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:20.524615    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:20.524678    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:20.535080    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:20.535145    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:20.545384    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:20.545444    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:20.555677    7666 logs.go:276] 0 containers: []
	W0520 03:35:20.555691    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:20.555750    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:20.566845    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:20.566864    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:20.566869    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:20.578424    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:20.578435    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:20.589556    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:20.589567    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:20.600979    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:20.600991    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:20.613312    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:20.613323    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:20.617979    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:20.617987    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:20.653035    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:20.653049    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:20.668934    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:20.668944    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:20.693792    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:20.693804    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:20.730364    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:20.730371    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:20.742767    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:20.742778    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:20.765986    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:20.765992    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:20.779534    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:20.779546    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:20.804744    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:20.804753    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:20.818456    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:20.818469    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:20.832766    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:20.832777    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:23.345884    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:28.348485    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:28.348695    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:28.360837    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:28.360914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:28.371958    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:28.372033    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:28.382806    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:28.382875    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:28.393812    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:28.393879    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:28.404451    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:28.404509    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:28.415380    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:28.415453    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:28.425082    7666 logs.go:276] 0 containers: []
	W0520 03:35:28.425097    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:28.425152    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:28.435925    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:28.435942    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:28.435948    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:28.440213    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:28.440221    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:28.454887    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:28.454897    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:28.469319    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:28.469330    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:28.486890    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:28.486901    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:28.511001    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:28.511008    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:28.547600    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:28.547613    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:28.561350    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:28.561361    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:28.572443    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:28.572455    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:28.588230    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:28.588240    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:28.623830    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:28.623846    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:28.639900    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:28.639912    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:28.652262    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:28.652271    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:28.665021    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:28.665030    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:28.679711    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:28.679722    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:28.691814    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:28.691825    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:31.217546    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:36.219893    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:36.220353    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:36.260041    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:36.260179    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:36.282149    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:36.282269    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:36.297270    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:36.297344    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:36.310296    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:36.310371    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:36.321075    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:36.321145    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:36.332028    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:36.332096    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:36.342373    7666 logs.go:276] 0 containers: []
	W0520 03:35:36.342387    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:36.342441    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:36.354182    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:36.354197    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:36.354202    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:36.368484    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:36.368497    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:36.385625    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:36.385638    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:36.423222    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:36.423232    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:36.442468    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:36.442477    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:36.460060    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:36.460071    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:36.471984    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:36.471996    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:36.484270    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:36.484282    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:36.495779    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:36.495790    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:36.507962    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:36.507970    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:36.519316    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:36.519326    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:36.553650    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:36.553656    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:36.558197    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:36.558204    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:36.570207    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:36.570221    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:36.594691    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:36.594699    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:36.608597    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:36.608607    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:39.133374    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:44.135847    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:44.135982    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:44.146628    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:44.146698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:44.157240    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:44.157333    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:44.168673    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:44.168739    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:44.179145    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:44.179211    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:44.189655    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:44.189723    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:44.200624    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:44.200679    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:44.211806    7666 logs.go:276] 0 containers: []
	W0520 03:35:44.211815    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:44.211858    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:44.223202    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:44.223226    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:44.223231    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:44.259492    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:44.259511    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:44.272030    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:44.272042    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:44.276495    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:44.276503    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:44.290833    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:44.290845    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:44.303326    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:44.303342    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:44.317855    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:44.317869    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:44.355483    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:44.355503    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:44.372952    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:44.372965    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:44.393153    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:44.393162    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:44.405929    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:44.405940    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:44.431821    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:44.431832    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:44.456249    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:44.456264    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:44.475679    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:44.475693    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:44.501635    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:44.501648    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:44.514702    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:44.514716    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:47.030448    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:52.032951    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:52.033061    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:52.047851    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:52.047929    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:52.059330    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:52.059409    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:52.070145    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:52.070219    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:52.081652    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:52.081718    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:52.092279    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:52.092346    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:52.102873    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:52.102934    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:52.113094    7666 logs.go:276] 0 containers: []
	W0520 03:35:52.113104    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:52.113163    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:52.124261    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:52.124278    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:52.124284    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:52.160082    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:52.160093    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:52.175913    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:52.175922    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:52.198932    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:52.198944    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:52.210818    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:52.210829    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:52.215258    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:52.215266    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:52.229684    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:52.229695    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:52.254660    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:52.254670    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:52.272675    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:52.272691    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:52.310380    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:52.310391    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:52.324794    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:52.324805    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:52.343073    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:52.343083    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:52.356631    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:52.356641    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:52.373313    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:52.373324    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:52.384527    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:52.384538    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:52.397343    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:52.397355    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:54.911528    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:59.914210    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:59.914608    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:59.951176    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:59.951355    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:59.973046    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:59.973146    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:59.988229    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:59.988306    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:00.000326    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:36:00.000403    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:00.010699    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:36:00.010764    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:00.021709    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:36:00.021777    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:00.033977    7666 logs.go:276] 0 containers: []
	W0520 03:36:00.033992    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:00.034057    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:00.044533    7666 logs.go:276] 1 containers: [a0a63a14b225]
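Each discovery pass above is one `docker ps -a` per control-plane component, filtered on the k8s_<component> name prefix the kubelet gives its containers and formatted down to bare IDs. A sketch of the same loop follows; it is illustrative only, since minikube drives these commands through ssh_runner on the node rather than locally:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
    	}
    }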
	I0520 03:36:00.044549    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:36:00.044555    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:36:00.056623    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:36:00.056634    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:36:00.067571    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:36:00.067580    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:36:00.090814    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:36:00.090826    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:36:00.104597    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:36:00.104609    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:36:00.116239    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:36:00.116251    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:36:00.128111    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:36:00.128122    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:36:00.145869    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:36:00.145880    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:36:00.157413    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:00.157423    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:00.180787    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:36:00.180798    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:00.193884    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:00.193896    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:00.229276    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:00.229285    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:00.264458    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:36:00.264470    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:36:00.282220    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:36:00.282231    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:36:00.303355    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:00.303365    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:00.307442    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:36:00.307449    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:36:02.824950    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:07.827130    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:07.827318    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:36:07.838364    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:36:07.838431    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:36:07.850044    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:36:07.850122    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:36:07.866625    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:36:07.866698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:07.877685    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:36:07.877775    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:07.888594    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:36:07.888670    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:07.900769    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:36:07.900844    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:07.910948    7666 logs.go:276] 0 containers: []
	W0520 03:36:07.910959    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:07.911019    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:07.922006    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:36:07.922023    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:36:07.922040    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:36:07.955022    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:36:07.955037    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:36:07.969481    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:36:07.969495    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:36:07.981287    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:36:07.981299    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:36:07.998463    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:07.998476    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:08.023364    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:08.023379    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:08.059246    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:36:08.059260    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:36:08.074184    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:36:08.074195    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:36:08.093741    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:36:08.093753    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:08.105482    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:36:08.105500    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:36:08.118115    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:36:08.118128    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:36:08.129815    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:36:08.129827    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:36:08.141582    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:08.141596    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:08.178278    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:36:08.178290    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:36:08.194241    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:08.194252    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:08.198908    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:36:08.198916    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:36:10.715071    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:15.717311    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:15.717381    7666 kubeadm.go:591] duration metric: took 4m3.734372042s to restartPrimaryControlPlane
	W0520 03:36:15.717451    7666 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 03:36:15.717481    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 03:36:16.704955    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 03:36:16.709871    7666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:36:16.712924    7666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:36:16.716141    7666 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:36:16.716148    7666 kubeadm.go:156] found existing configuration files:
	
	I0520 03:36:16.716204    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/admin.conf
	I0520 03:36:16.718651    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:36:16.718672    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:36:16.721697    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/kubelet.conf
	I0520 03:36:16.724859    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:36:16.724882    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:36:16.727466    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/controller-manager.conf
	I0520 03:36:16.730119    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:36:16.730137    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:36:16.733452    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/scheduler.conf
	I0520 03:36:16.736426    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:36:16.736449    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
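The four grep/rm pairs above implement minikube's stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so the subsequent `kubeadm init` can rewrite it (here every file was already missing, so each grep exits with status 2). A pure-Go sketch of the same logic, illustrative rather than the actual kubeadm.go code:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:51080"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Mirrors `sudo rm -f <conf>`: a missing file is not an error.
    			os.Remove(conf)
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
    		}
    	}
    }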
	I0520 03:36:16.739114    7666 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
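The long --ignore-preflight-errors list is minikube's fixed waiver set for re-initialising over a previous cluster: the DirAvailable and FileAvailable entries cover manifests and data directories the old cluster left behind, Port-10250 covers a kubelet still bound to its port, and Swap, NumCPU and Mem relax host checks a small test VM may not satisfy.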
	I0520 03:36:16.755945    7666 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 03:36:16.755973    7666 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 03:36:16.803753    7666 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 03:36:16.803804    7666 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 03:36:16.803851    7666 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 03:36:16.852069    7666 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 03:36:16.860336    7666 out.go:204]   - Generating certificates and keys ...
	I0520 03:36:16.860369    7666 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 03:36:16.860410    7666 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 03:36:16.860447    7666 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 03:36:16.860478    7666 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 03:36:16.860509    7666 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 03:36:16.860552    7666 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 03:36:16.860587    7666 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 03:36:16.860620    7666 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 03:36:16.860657    7666 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 03:36:16.860696    7666 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 03:36:16.860718    7666 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 03:36:16.860745    7666 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 03:36:16.925027    7666 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 03:36:16.977093    7666 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 03:36:17.381800    7666 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 03:36:17.443869    7666 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 03:36:17.473911    7666 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 03:36:17.475005    7666 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 03:36:17.475076    7666 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 03:36:17.561370    7666 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 03:36:17.565548    7666 out.go:204]   - Booting up control plane ...
	I0520 03:36:17.565595    7666 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 03:36:17.565644    7666 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 03:36:17.565693    7666 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 03:36:17.565778    7666 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 03:36:17.565860    7666 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 03:36:22.066282    7666 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501677 seconds
	I0520 03:36:22.066343    7666 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 03:36:22.070671    7666 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 03:36:22.578963    7666 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 03:36:22.579087    7666 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-908000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 03:36:23.082344    7666 kubeadm.go:309] [bootstrap-token] Using token: mz6kye.uxn8tzbb1tomjbed
	I0520 03:36:23.087228    7666 out.go:204]   - Configuring RBAC rules ...
	I0520 03:36:23.087298    7666 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 03:36:23.087346    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 03:36:23.088989    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 03:36:23.093744    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 03:36:23.094840    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 03:36:23.095799    7666 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 03:36:23.098997    7666 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 03:36:23.280309    7666 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 03:36:23.487545    7666 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 03:36:23.487987    7666 kubeadm.go:309] 
	I0520 03:36:23.488015    7666 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 03:36:23.488024    7666 kubeadm.go:309] 
	I0520 03:36:23.488062    7666 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 03:36:23.488065    7666 kubeadm.go:309] 
	I0520 03:36:23.488081    7666 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 03:36:23.488112    7666 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 03:36:23.488154    7666 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 03:36:23.488162    7666 kubeadm.go:309] 
	I0520 03:36:23.488200    7666 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 03:36:23.488202    7666 kubeadm.go:309] 
	I0520 03:36:23.488224    7666 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 03:36:23.488228    7666 kubeadm.go:309] 
	I0520 03:36:23.488254    7666 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 03:36:23.488293    7666 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 03:36:23.488343    7666 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 03:36:23.488348    7666 kubeadm.go:309] 
	I0520 03:36:23.488396    7666 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 03:36:23.488442    7666 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 03:36:23.488445    7666 kubeadm.go:309] 
	I0520 03:36:23.488505    7666 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mz6kye.uxn8tzbb1tomjbed \
	I0520 03:36:23.488559    7666 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 \
	I0520 03:36:23.488569    7666 kubeadm.go:309] 	--control-plane 
	I0520 03:36:23.488571    7666 kubeadm.go:309] 
	I0520 03:36:23.488609    7666 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 03:36:23.488613    7666 kubeadm.go:309] 
	I0520 03:36:23.488648    7666 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mz6kye.uxn8tzbb1tomjbed \
	I0520 03:36:23.488701    7666 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 
	I0520 03:36:23.488758    7666 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
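The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch of recomputing it from the certificateDir this run uses (/var/lib/minikube/certs, per the [certs] lines above); illustrative, not kubeadm's own code:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SubjectPublicKeyInfo, not the whole certificate.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }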
	I0520 03:36:23.488845    7666 cni.go:84] Creating CNI manager for ""
	I0520 03:36:23.488855    7666 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:36:23.497108    7666 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 03:36:23.501267    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 03:36:23.505118    7666 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
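The 496 bytes copied into /etc/cni/net.d/1-k8s.conflist are the bridge CNI configuration announced on the previous line. A representative bridge-plus-host-local conflist of the kind minikube writes is shown below; the exact fields and subnet are assumptions, not the payload from this run:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }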
	I0520 03:36:23.510053    7666 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 03:36:23.510109    7666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:36:23.510118    7666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-908000 minikube.k8s.io/updated_at=2024_05_20T03_36_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=running-upgrade-908000 minikube.k8s.io/primary=true
	I0520 03:36:23.551313    7666 kubeadm.go:1107] duration metric: took 41.249417ms to wait for elevateKubeSystemPrivileges
	I0520 03:36:23.551319    7666 ops.go:34] apiserver oom_adj: -16
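An oom_adj of -16 for the apiserver process (read back via /proc/$(pgrep kube-apiserver)/oom_adj two lines up) biases the kernel's OOM killer away from it, so memory pressure on the node kills workloads before the control plane.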
	W0520 03:36:23.551420    7666 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 03:36:23.551425    7666 kubeadm.go:393] duration metric: took 4m11.582283292s to StartCluster
	I0520 03:36:23.551434    7666 settings.go:142] acquiring lock: {Name:mkc3af27fbea4a81f456d1d023b17ad3b4bc78ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:36:23.551609    7666 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:36:23.551989    7666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:36:23.552211    7666 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:36:23.554203    7666 out.go:177] * Verifying Kubernetes components...
	I0520 03:36:23.552222    7666 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 03:36:23.552283    7666 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:36:23.562318    7666 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-908000"
	I0520 03:36:23.562318    7666 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-908000"
	I0520 03:36:23.562337    7666 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-908000"
	W0520 03:36:23.562342    7666 addons.go:243] addon storage-provisioner should already be in state true
	I0520 03:36:23.562353    7666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-908000"
	I0520 03:36:23.562330    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:36:23.562364    7666 host.go:66] Checking if "running-upgrade-908000" exists ...
	I0520 03:36:23.563361    7666 kapi.go:59] client config for running-upgrade-908000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026fc580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:36:23.563484    7666 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-908000"
	W0520 03:36:23.563488    7666 addons.go:243] addon default-storageclass should already be in state true
	I0520 03:36:23.563495    7666 host.go:66] Checking if "running-upgrade-908000" exists ...
	I0520 03:36:23.568223    7666 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:36:23.571280    7666 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:36:23.571286    7666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 03:36:23.571292    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:36:23.571850    7666 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 03:36:23.571856    7666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 03:36:23.571860    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:36:23.657095    7666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:36:23.661827    7666 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:36:23.661871    7666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:36:23.666060    7666 api_server.go:72] duration metric: took 113.841833ms to wait for apiserver process to appear ...
	I0520 03:36:23.666068    7666 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:36:23.666074    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:23.729172    7666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:36:23.729794    7666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 03:36:28.667655    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:28.667741    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:33.668282    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:33.668303    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:38.668506    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:38.668534    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:43.669288    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:43.669328    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:48.669909    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:48.669934    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:53.670650    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:53.670683    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 03:36:54.079027    7666 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 03:36:54.083355    7666 out.go:177] * Enabled addons: storage-provisioner
	I0520 03:36:54.091247    7666 addons.go:505] duration metric: took 30.539599042s for enable addons: enabled=[storage-provisioner]
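The asymmetry between the two addons follows from how each is enabled: storage-provisioner is a manifest copied to the node and applied with the node's own kubectl, so the step completes when that runner returns, while default-storageclass runs a client-go callback from the host that must list StorageClasses over https://10.0.2.15:8443, the same endpoint every healthz probe in this section fails to reach.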
	I0520 03:36:58.671623    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:58.671663    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:03.672974    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:03.673014    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:08.674562    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:08.674588    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:13.676678    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:13.676708    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:18.678797    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:18.678836    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:23.680941    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:23.681033    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:23.705969    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:23.706041    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:23.723693    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:23.723763    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:23.736318    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:23.736394    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:23.746979    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:23.747053    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:23.759898    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:23.759969    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:23.770618    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:23.770682    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:23.780711    7666 logs.go:276] 0 containers: []
	W0520 03:37:23.780721    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:23.780780    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:23.793030    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:23.793046    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:23.793052    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:23.829304    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:23.829317    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:23.902310    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:23.902326    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:23.916863    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:23.916874    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:23.928605    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:23.928618    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:23.946096    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:23.946110    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:23.951079    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:23.951086    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:23.965964    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:23.965973    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:23.977564    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:23.977579    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:23.997452    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:23.997463    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:24.009384    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:24.009395    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:24.020945    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:24.020957    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:24.045951    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:24.045961    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:26.559436    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:31.560199    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:31.560331    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:31.579430    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:31.579492    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:31.590235    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:31.590300    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:31.600550    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:31.600613    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:31.611272    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:31.611333    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:31.624221    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:31.624283    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:31.634485    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:31.634555    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:31.644838    7666 logs.go:276] 0 containers: []
	W0520 03:37:31.644850    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:31.644901    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:31.655156    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:31.655175    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:31.655179    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:31.666867    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:31.666878    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:31.685308    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:31.685318    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:31.710358    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:31.710365    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:31.728147    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:31.728157    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:31.739787    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:31.739796    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:31.755423    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:31.755432    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:31.770752    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:31.770762    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:31.782392    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:31.782402    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:31.793671    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:31.793681    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:31.830446    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:31.830468    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:31.835747    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:31.835765    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:31.874636    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:31.874647    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:34.401618    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:39.403803    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:39.403888    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:39.415363    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:39.415430    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:39.425845    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:39.425916    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:39.443663    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:39.443729    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:39.455083    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:39.455154    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:39.466385    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:39.466459    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:39.477178    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:39.477250    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:39.488839    7666 logs.go:276] 0 containers: []
	W0520 03:37:39.488854    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:39.488914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:39.500085    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:39.500100    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:39.500105    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:39.511803    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:39.511813    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:39.523409    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:39.523422    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:39.538495    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:39.538504    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:39.559159    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:39.559168    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:39.582349    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:39.582362    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:39.606105    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:39.606120    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:39.643979    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:39.643994    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:39.659206    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:39.659220    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:39.673643    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:39.673656    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:39.685531    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:39.685546    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:39.697342    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:39.697353    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:39.732925    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:39.732932    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:42.239134    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:47.241298    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:47.241388    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:47.252431    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:47.252505    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:47.263825    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:47.263897    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:47.275726    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:47.275801    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:47.287278    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:47.287353    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:47.297641    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:47.297714    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:47.308327    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:47.308403    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:47.319659    7666 logs.go:276] 0 containers: []
	W0520 03:37:47.319670    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:47.319735    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:47.331125    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:47.331145    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:47.331153    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:47.336015    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:47.336029    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:47.384137    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:47.384148    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:47.398600    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:47.398611    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:47.412593    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:47.412604    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:47.429584    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:47.429595    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:47.445046    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:47.445057    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:47.469333    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:47.469349    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:47.505692    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:47.505702    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:47.523929    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:47.523939    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:47.535743    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:47.535755    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:47.551488    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:47.551499    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:47.562940    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:47.562950    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:50.076304    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:55.078427    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:55.078530    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:55.089980    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:55.090057    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:55.107769    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:55.107849    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:55.119292    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:55.119364    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:55.131106    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:55.131178    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:55.142023    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:55.142103    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:55.153815    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:55.153892    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:55.165846    7666 logs.go:276] 0 containers: []
	W0520 03:37:55.165858    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:55.165923    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:55.176858    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:55.176874    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:55.176879    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:55.181814    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:55.181826    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:55.196624    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:55.196632    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:55.209995    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:55.210006    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:55.222748    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:55.222761    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:55.247362    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:55.247372    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:55.259513    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:55.259527    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:55.277678    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:55.277689    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:55.313450    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:55.313459    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:55.349535    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:55.349546    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:55.365943    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:55.365953    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:55.377650    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:55.377661    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:55.393021    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:55.393031    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
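The block above is one complete diagnostic cycle: api_server.go probes https://10.0.2.15:8443/healthz, the 5-second client timeout fires ("context deadline exceeded"), and logs.go re-enumerates every control-plane container and tails its logs before the next probe. A minimal Go sketch of that probe loop, assuming a plain net/http client with certificate verification disabled for the in-VM apiserver; the names pollHealthz and deadline are illustrative, not minikube's actual API:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz is an illustrative stand-in for the loop visible in
    // the log: GET /healthz with a short client timeout, retry until an
    // overall deadline passes. All names here are hypothetical.
    func pollHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            // matches the ~5s gap between each "Checking" and "stopped" line
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the apiserver inside the VM serves a self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reported healthy
                }
            }
            time.Sleep(2 * time.Second) // pause before the next probe
        }
        return fmt.Errorf("apiserver at %s never reported healthy", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

In the failing run below, every probe hits the timeout branch, so the cycle repeats roughly every eight seconds until the test gives up.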
	I0520 03:37:57.919475    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:02.919860    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:02.919951    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:02.931261    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:02.931329    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:02.942126    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:02.942189    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:02.953146    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:02.953209    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:02.964180    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:02.964249    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:02.975357    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:02.975427    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:02.986750    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:02.986820    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:02.998093    7666 logs.go:276] 0 containers: []
	W0520 03:38:02.998106    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:02.998167    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:03.010773    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:03.010792    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:03.010798    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:03.023938    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:03.023949    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:03.037621    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:03.037634    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:03.064452    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:03.064462    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:03.104082    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:03.104098    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:03.109264    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:03.109275    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:03.147654    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:03.147673    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:03.162671    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:03.162681    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:03.180487    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:03.180497    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:03.192424    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:03.192435    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:03.206364    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:03.206375    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:03.218519    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:03.218533    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:03.233956    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:03.233968    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
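Each retry re-runs the same two commands per component that appear verbatim in the log: a docker ps name filter on the k8s_ prefix to find container IDs, then docker logs --tail 400 on each ID. A hedged sketch of that gathering step using the exact commands shown above; running them locally via os/exec is an assumption for illustration, since minikube actually executes them over SSH inside the VM through its ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gatherComponentLogs mirrors the two commands the log shows per
    // component: enumerate containers by k8s_ name prefix, then tail
    // each one. Illustrative only.
    func gatherComponentLogs(component string) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            // same tail length the log shows minikube using
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
        }
    }

    func main() {
        // the components enumerated in each cycle of the log
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "storage-provisioner"} {
            gatherComponentLogs(c)
        }
    }

Note that the "Gathering logs for ..." order differs on every cycle even though the container set is identical; that shuffling is consistent with ranging over a Go map, whose iteration order is deliberately randomized.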
	I0520 03:38:05.754724    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:10.755652    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:10.755734    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:10.766585    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:10.766657    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:10.777723    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:10.777790    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:10.788743    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:10.788812    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:10.800450    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:10.800516    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:10.811907    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:10.811976    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:10.822916    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:10.822986    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:10.834699    7666 logs.go:276] 0 containers: []
	W0520 03:38:10.834709    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:10.834768    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:10.845950    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:10.845967    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:10.845973    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:10.857957    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:10.857969    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:10.884954    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:10.884973    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:10.925252    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:10.925264    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:10.930399    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:10.930412    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:10.967851    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:10.967864    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:10.980074    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:10.980085    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:11.001323    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:11.001337    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:11.013655    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:11.013668    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:11.028763    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:11.028774    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:11.048059    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:11.048068    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:11.062065    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:11.062075    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:11.074553    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:11.074564    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:13.595571    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:18.596117    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:18.596193    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:18.607730    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:18.607804    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:18.618995    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:18.619064    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:18.630266    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:18.630333    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:18.644037    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:18.644108    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:18.655823    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:18.655896    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:18.667299    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:18.667373    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:18.678484    7666 logs.go:276] 0 containers: []
	W0520 03:38:18.678494    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:18.678551    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:18.689994    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:18.690009    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:18.690015    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:18.702900    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:18.702912    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:18.728389    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:18.728403    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:18.741051    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:18.741064    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:18.780747    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:18.780758    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:18.823028    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:18.823039    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:18.839345    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:18.839357    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:18.855621    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:18.855635    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:18.868859    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:18.868871    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:18.887011    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:18.887023    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:18.899548    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:18.899560    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:18.904325    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:18.904332    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:18.919111    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:18.919123    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:21.433720    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:26.436006    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:26.436123    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:26.447843    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:26.447914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:26.458298    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:26.458366    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:26.469043    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:26.469116    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:26.480618    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:26.480685    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:26.499631    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:26.499673    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:26.510921    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:26.510964    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:26.521927    7666 logs.go:276] 0 containers: []
	W0520 03:38:26.521936    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:26.521962    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:26.533593    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:26.533607    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:26.533611    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:26.559804    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:26.559819    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:26.599257    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:26.599272    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:26.603963    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:26.603973    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:26.642135    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:26.642148    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:26.657588    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:26.657601    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:26.672795    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:26.672812    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:26.688805    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:26.688817    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:26.701217    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:26.701229    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:26.713738    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:26.713749    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:26.726614    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:26.726626    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:26.738649    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:26.738660    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:26.759701    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:26.759716    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:29.274901    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:34.276814    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:34.276974    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:34.287834    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:34.287914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:34.298224    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:34.298293    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:34.308490    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:34.308560    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:34.318950    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:34.319011    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:34.329935    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:34.330004    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:34.340331    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:34.340401    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:34.350734    7666 logs.go:276] 0 containers: []
	W0520 03:38:34.350745    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:34.350798    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:34.360888    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:34.360904    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:34.360910    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:34.379281    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:34.379295    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:34.391689    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:34.391700    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:34.431417    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:34.431428    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:34.444241    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:34.444253    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:34.467131    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:34.467143    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:34.480263    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:34.480273    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:34.497509    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:34.497521    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:34.510659    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:34.510675    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:34.535225    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:34.535236    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:34.572847    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:34.572865    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:34.577809    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:34.577816    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:34.593675    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:34.593688    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:37.111014    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:42.113259    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:42.113483    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:42.137259    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:42.137377    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:42.155457    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:42.155535    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:42.168004    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:38:42.168081    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:42.178714    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:42.178778    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:42.189296    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:42.189369    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:42.200501    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:42.200567    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:42.214232    7666 logs.go:276] 0 containers: []
	W0520 03:38:42.214243    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:42.214298    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:42.225506    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:42.225523    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:42.225529    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:42.244551    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:42.244561    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:42.258514    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:42.258529    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:42.270612    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:42.270619    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:42.298011    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:42.298021    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:42.337817    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:42.337832    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:42.354422    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:42.354431    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:42.366998    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:38:42.367011    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:38:42.378831    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:42.378843    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:42.391438    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:42.391453    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:42.404327    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:42.404340    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:42.409576    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:42.409585    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:42.447906    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:38:42.447919    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:38:42.460805    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:42.460817    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:42.474504    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:42.474515    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
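From 03:38:42 onward the coredns enumeration returns four containers instead of two: the new IDs 6322c26e70c9 and 1ed10418fbc0 appear alongside the original pair, consistent with the CoreDNS pods being restarted while the apiserver healthz endpoint keeps timing out. Every subsequent cycle therefore tails four coredns logs rather than two.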
	I0520 03:38:44.997144    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:49.999848    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:50.000358    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:50.042086    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:50.042242    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:50.063692    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:50.063789    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:50.078100    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:38:50.078184    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:50.089726    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:50.089794    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:50.100863    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:50.100935    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:50.111914    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:50.111986    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:50.122649    7666 logs.go:276] 0 containers: []
	W0520 03:38:50.122667    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:50.122720    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:50.142943    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:50.142962    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:50.142967    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:50.180244    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:50.180255    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:50.199417    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:50.199429    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:50.212459    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:50.212470    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:50.234577    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:38:50.234593    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:38:50.247549    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:50.247560    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:50.260701    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:50.260710    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:50.273031    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:50.273041    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:50.300113    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:50.300123    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:50.312423    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:50.312435    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:50.352484    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:50.352502    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:50.363505    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:50.363520    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:50.380263    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:50.380278    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:50.396004    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:50.396017    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:50.412894    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:38:50.412906    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:38:52.926511    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:57.927356    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:57.927599    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:57.950326    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:57.950445    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:57.966457    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:57.966535    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:57.978545    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:38:57.978611    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:57.989432    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:57.989503    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:57.999950    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:58.000019    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:58.011127    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:58.011197    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:58.021948    7666 logs.go:276] 0 containers: []
	W0520 03:38:58.021959    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:58.022014    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:58.032724    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:58.032743    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:58.032748    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:58.069065    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:58.069076    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:58.086573    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:58.086583    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:58.098377    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:58.098385    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:58.103123    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:38:58.103133    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:38:58.115699    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:58.115710    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:58.128346    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:58.128358    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:58.147647    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:58.147658    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:58.160760    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:58.160772    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:58.200510    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:58.200527    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:58.215293    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:38:58.215309    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:38:58.227387    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:58.227397    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:58.245323    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:58.245336    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:58.259673    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:58.259687    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:58.272685    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:58.272696    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:00.802181    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:05.804452    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:05.804680    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:05.826505    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:05.826608    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:05.842171    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:05.842239    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:05.854903    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:05.854978    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:05.866405    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:05.866476    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:05.877344    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:05.877415    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:05.888083    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:05.888153    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:05.897964    7666 logs.go:276] 0 containers: []
	W0520 03:39:05.897974    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:05.898028    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:05.908110    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:05.908128    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:05.908133    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:05.921282    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:05.921293    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:05.932817    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:05.932827    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:05.937195    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:05.937207    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:05.977548    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:05.977560    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:05.991313    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:05.991323    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:06.003841    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:06.003852    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:06.025136    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:06.025148    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:06.052240    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:06.052252    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:06.065245    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:06.065256    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:06.104755    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:06.104772    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:06.124667    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:06.124682    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:06.151837    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:06.151851    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:06.167468    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:06.167480    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:06.179881    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:06.179892    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:08.699073    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:13.701355    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:13.701529    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:13.717301    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:13.717381    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:13.730212    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:13.730285    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:13.741503    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:13.741572    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:13.751834    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:13.751899    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:13.761869    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:13.761931    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:13.772372    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:13.772439    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:13.782807    7666 logs.go:276] 0 containers: []
	W0520 03:39:13.782817    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:13.782866    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:13.795408    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:13.795424    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:13.795429    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:13.810153    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:13.810163    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:13.821760    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:13.821771    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:13.833031    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:13.833044    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:13.847969    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:13.847979    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:13.884875    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:13.884882    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:13.900295    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:13.900310    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:13.912900    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:13.912911    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:13.931566    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:13.931577    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:13.957613    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:13.957629    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:13.962591    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:13.962605    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:14.001821    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:14.001829    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:14.014766    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:14.014777    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:14.027723    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:14.027735    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:14.040461    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:14.040474    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:16.556504    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:21.558667    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:21.558872    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:21.577327    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:21.577417    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:21.590789    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:21.590865    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:21.602065    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:21.602133    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:21.612528    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:21.612599    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:21.622611    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:21.622688    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:21.633302    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:21.633379    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:21.643635    7666 logs.go:276] 0 containers: []
	W0520 03:39:21.643645    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:21.643698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:21.654017    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:21.654033    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:21.654038    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:21.667658    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:21.667670    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:21.678830    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:21.678839    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:21.714049    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:21.714059    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:21.728442    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:21.728454    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:21.753753    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:21.753761    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:21.788958    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:21.788971    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:21.801647    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:21.801659    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:21.814182    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:21.814195    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:21.836073    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:21.836084    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:21.841384    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:21.841394    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:21.856165    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:21.856176    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:21.868477    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:21.868490    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:21.884852    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:21.884867    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:21.897253    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:21.897266    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:24.411648    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:29.413910    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:29.414175    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:29.436727    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:29.436845    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:29.451562    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:29.451641    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:29.464079    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:29.464157    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:29.475131    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:29.475192    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:29.485391    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:29.485456    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:29.497014    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:29.497083    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:29.506758    7666 logs.go:276] 0 containers: []
	W0520 03:39:29.506772    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:29.506830    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:29.521125    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:29.521144    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:29.521149    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:29.538156    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:29.538167    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:29.553115    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:29.553125    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:29.570178    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:29.570189    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:29.594091    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:29.594098    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:29.629620    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:29.629630    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:29.643744    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:29.643759    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:29.655243    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:29.655254    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:29.670397    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:29.670408    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:29.687019    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:29.687032    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:29.699748    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:29.699759    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:29.716917    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:29.716928    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:29.729315    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:29.729327    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:29.734306    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:29.734313    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:29.772959    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:29.772971    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:32.289634    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:37.291878    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:37.292111    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:37.315972    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:37.316102    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:37.331519    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:37.331596    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:37.343932    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:37.343999    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:37.357551    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:37.357626    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:37.368160    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:37.368240    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:37.378674    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:37.378746    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:37.392159    7666 logs.go:276] 0 containers: []
	W0520 03:39:37.392169    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:37.392225    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:37.402610    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:37.402633    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:37.402639    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:37.414047    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:37.414061    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:37.442774    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:37.442784    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:37.454075    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:37.454086    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:37.469129    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:37.469140    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:37.484460    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:37.484469    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:37.502903    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:37.502912    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:37.514883    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:37.514898    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:37.551795    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:37.551804    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:37.566083    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:37.566095    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:37.577442    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:37.577452    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:37.591424    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:37.591435    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:37.616474    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:37.616491    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:37.621788    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:37.621800    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:37.637949    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:37.637962    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:40.181113    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:45.183425    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:45.183708    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:45.217543    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:45.217668    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:45.235367    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:45.235455    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:45.249482    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:45.249560    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:45.261370    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:45.261445    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:45.272581    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:45.272657    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:45.284004    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:45.284075    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:45.294699    7666 logs.go:276] 0 containers: []
	W0520 03:39:45.294710    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:45.294768    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:45.305605    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:45.305627    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:45.305633    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:45.320216    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:45.320229    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:45.363349    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:45.363363    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:45.377745    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:45.377758    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:45.389472    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:45.389486    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:45.407982    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:45.407994    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:45.412599    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:45.412608    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:45.424318    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:45.424331    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:45.445895    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:45.445906    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:45.458535    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:45.458546    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:45.482163    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:45.482177    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:45.495193    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:45.495205    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:45.507594    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:45.507605    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:45.520009    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:45.520020    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:45.532816    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:45.532831    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:48.074748    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:53.077130    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:53.077571    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:53.113109    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:53.113245    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:53.133643    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:53.133751    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:53.148178    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:53.148257    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:53.160214    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:53.160285    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:53.172279    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:53.172358    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:53.182708    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:53.182780    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:53.192559    7666 logs.go:276] 0 containers: []
	W0520 03:39:53.192570    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:53.192626    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:53.204146    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:53.204165    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:53.204170    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:53.229262    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:53.229271    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:53.266602    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:53.266612    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:53.280873    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:53.280882    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:53.292287    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:53.292296    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:53.303824    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:53.303836    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:53.315890    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:53.315905    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:53.331460    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:53.331475    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:53.356056    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:53.356068    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:53.361015    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:53.361022    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:53.375583    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:53.375593    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:53.389310    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:53.389321    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:53.423873    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:53.423883    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:53.436205    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:53.436216    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:53.451907    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:53.451915    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:55.966682    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:00.968977    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:00.969104    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:40:00.980148    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:40:00.980225    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:40:00.991711    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:40:00.991783    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:40:01.002663    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:40:01.002739    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:40:01.013812    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:40:01.013877    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:40:01.023774    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:40:01.023844    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:40:01.034502    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:40:01.034564    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:40:01.044844    7666 logs.go:276] 0 containers: []
	W0520 03:40:01.044856    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:40:01.044912    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:40:01.055684    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:40:01.055701    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:40:01.055709    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:40:01.095381    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:40:01.095393    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:40:01.111162    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:40:01.111175    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:40:01.126144    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:40:01.126153    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:40:01.138253    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:40:01.138268    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:40:01.142997    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:40:01.143011    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:40:01.155784    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:40:01.155799    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:40:01.168918    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:40:01.168931    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:40:01.187236    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:40:01.187249    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:40:01.200612    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:40:01.200622    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:40:01.212733    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:40:01.212742    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:40:01.251056    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:40:01.251074    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:40:01.263889    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:40:01.263900    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:40:01.282165    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:40:01.282179    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:40:01.294619    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:40:01.294628    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:40:03.820738    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:08.821622    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:08.821746    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:40:08.833022    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:40:08.833100    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:40:08.843850    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:40:08.843917    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:40:08.854255    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:40:08.854318    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:40:08.864519    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:40:08.864588    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:40:08.874449    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:40:08.874513    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:40:08.885034    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:40:08.885100    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:40:08.895715    7666 logs.go:276] 0 containers: []
	W0520 03:40:08.895733    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:40:08.895792    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:40:08.906940    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:40:08.906958    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:40:08.906963    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:40:08.918434    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:40:08.918445    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:40:08.942162    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:40:08.942169    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:40:08.953737    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:40:08.953748    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:40:08.991858    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:40:08.991869    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:40:08.996229    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:40:08.996235    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:40:09.010879    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:40:09.010890    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:40:09.022335    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:40:09.022345    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:40:09.061892    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:40:09.061903    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:40:09.078788    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:40:09.078799    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:40:09.090644    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:40:09.090655    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:40:09.102421    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:40:09.102434    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:40:09.117755    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:40:09.117766    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:40:09.129225    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:40:09.129234    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:40:09.142089    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:40:09.142104    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:40:11.665070    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:16.665517    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:16.665645    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:40:16.678248    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:40:16.678323    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:40:16.689162    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:40:16.689229    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:40:16.699978    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:40:16.700047    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:40:16.710410    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:40:16.710476    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:40:16.720932    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:40:16.720993    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:40:16.731803    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:40:16.731870    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:40:16.742066    7666 logs.go:276] 0 containers: []
	W0520 03:40:16.742077    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:40:16.742130    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:40:16.752321    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:40:16.752375    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:40:16.752381    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:40:16.766801    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:40:16.766811    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:40:16.782250    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:40:16.782264    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:40:16.794552    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:40:16.794564    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:40:16.806007    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:40:16.806018    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:40:16.820463    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:40:16.820478    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:40:16.837399    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:40:16.837409    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:40:16.848706    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:40:16.848716    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:40:16.886139    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:40:16.886150    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:40:16.900571    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:40:16.900582    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:40:16.912664    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:40:16.912675    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:40:16.924858    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:40:16.924869    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:40:16.936599    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:40:16.936609    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:40:16.960589    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:40:16.960597    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:40:16.965325    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:40:16.965334    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:40:19.504082    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:24.505454    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:24.509020    7666 out.go:177] 
	W0520 03:40:24.511983    7666 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 03:40:24.511993    7666 out.go:239] * 
	W0520 03:40:24.512695    7666 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:40:24.523971    7666 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-908000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-20 03:40:24.616204 -0700 PDT m=+1284.144023251
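(Editor's note) The repeated "Checking apiserver healthz ... context deadline exceeded" cycle in the stderr log above is the node wait loop timing out: each probe of https://10.0.2.15:8443/healthz is given roughly 5s, and the probes repeat until the 6m0s node-wait deadline expires, producing the final GUEST_START error. The following is a minimal sketch of such a polling loop, assuming only the URL, per-request timeout, and overall deadline visible in the log; it is illustrative and is not minikube's actual implementation (see api_server.go in the minikube source for that).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Per-request timeout mirrors the ~5s gap between each "Checking"
		// line and its "stopped: ... Client.Timeout exceeded" line above.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The healthz endpoint is served over TLS inside the guest VM;
				// certificate verification is skipped purely for this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node"
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz reported healthy")
					return
				}
			} else {
				fmt.Printf("stopped: %v\n", err)
			}
			time.Sleep(3 * time.Second) // log shows ~8s between probes (5s timeout + pause)
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}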
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-908000 -n running-upgrade-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-908000 -n running-upgrade-908000: exit status 2 (15.688648791s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
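(Editor's note) The harness treats this non-zero status as potentially benign: the host reports "Running" even though the command exits 2, because the VM is up while the Kubernetes components behind it never became healthy. Below is a hypothetical Go helper in the same spirit as the post-mortem step above; the binary path and profile name are copied from the log, but the helper itself is an assumption and not part of the test suite.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the post-mortem status check above.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}",
			"-p", "running-upgrade-908000",
			"-n", "running-upgrade-908000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("stdout: %s", out) // printed "Running" in the run above
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 2 with a "Running" host: the machine is up, but the
			// state of the cluster components kept the command from exiting 0.
			fmt.Printf("non-zero exit: %d (may be ok)\n", exitErr.ExitCode())
		}
	}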
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-908000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-897000          | force-systemd-flag-897000 | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-703000              | force-systemd-env-703000  | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-703000           | force-systemd-env-703000  | jenkins | v1.33.1 | 20 May 24 03:30 PDT | 20 May 24 03:30 PDT |
	| start   | -p docker-flags-521000                | docker-flags-521000       | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-897000             | force-systemd-flag-897000 | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-897000          | force-systemd-flag-897000 | jenkins | v1.33.1 | 20 May 24 03:30 PDT | 20 May 24 03:30 PDT |
	| start   | -p cert-expiration-708000             | cert-expiration-708000    | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-521000 ssh               | docker-flags-521000       | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-521000 ssh               | docker-flags-521000       | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-521000                | docker-flags-521000       | jenkins | v1.33.1 | 20 May 24 03:30 PDT | 20 May 24 03:30 PDT |
	| start   | -p cert-options-840000                | cert-options-840000       | jenkins | v1.33.1 | 20 May 24 03:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-840000 ssh               | cert-options-840000       | jenkins | v1.33.1 | 20 May 24 03:31 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840000 -- sudo        | cert-options-840000       | jenkins | v1.33.1 | 20 May 24 03:31 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-840000                | cert-options-840000       | jenkins | v1.33.1 | 20 May 24 03:31 PDT | 20 May 24 03:31 PDT |
	| start   | -p running-upgrade-908000             | minikube                  | jenkins | v1.26.0 | 20 May 24 03:31 PDT | 20 May 24 03:32 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-908000             | running-upgrade-908000    | jenkins | v1.33.1 | 20 May 24 03:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-708000             | cert-expiration-708000    | jenkins | v1.33.1 | 20 May 24 03:34 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-708000             | cert-expiration-708000    | jenkins | v1.33.1 | 20 May 24 03:34 PDT | 20 May 24 03:34 PDT |
	| start   | -p kubernetes-upgrade-008000          | kubernetes-upgrade-008000 | jenkins | v1.33.1 | 20 May 24 03:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-008000          | kubernetes-upgrade-008000 | jenkins | v1.33.1 | 20 May 24 03:34 PDT | 20 May 24 03:34 PDT |
	| start   | -p kubernetes-upgrade-008000          | kubernetes-upgrade-008000 | jenkins | v1.33.1 | 20 May 24 03:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-008000          | kubernetes-upgrade-008000 | jenkins | v1.33.1 | 20 May 24 03:34 PDT | 20 May 24 03:34 PDT |
	| start   | -p stopped-upgrade-555000             | minikube                  | jenkins | v1.26.0 | 20 May 24 03:34 PDT | 20 May 24 03:35 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-555000 stop           | minikube                  | jenkins | v1.26.0 | 20 May 24 03:35 PDT | 20 May 24 03:35 PDT |
	| start   | -p stopped-upgrade-555000             | stopped-upgrade-555000    | jenkins | v1.33.1 | 20 May 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:35:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:35:21.003210    7819 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:35:21.003335    7819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:35:21.003342    7819 out.go:304] Setting ErrFile to fd 2...
	I0520 03:35:21.003344    7819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:35:21.003464    7819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:35:21.004453    7819 out.go:298] Setting JSON to false
	I0520 03:35:21.021683    7819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5692,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:35:21.021756    7819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:35:21.026450    7819 out.go:177] * [stopped-upgrade-555000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:35:21.034374    7819 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:35:21.037465    7819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:35:21.034457    7819 notify.go:220] Checking for updates...
	I0520 03:35:21.043455    7819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:35:21.046435    7819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:35:21.047794    7819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:35:21.050431    7819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:35:21.053700    7819 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:35:21.057413    7819 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 03:35:21.060353    7819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:35:21.064361    7819 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:35:21.071418    7819 start.go:297] selected driver: qemu2
	I0520 03:35:21.071425    7819 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:35:21.071493    7819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:35:21.074019    7819 cni.go:84] Creating CNI manager for ""
	I0520 03:35:21.074035    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:35:21.074059    7819 start.go:340] cluster config:
	{Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:35:21.074113    7819 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:35:21.080284    7819 out.go:177] * Starting "stopped-upgrade-555000" primary control-plane node in "stopped-upgrade-555000" cluster
	I0520 03:35:21.084395    7819 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 03:35:21.084412    7819 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 03:35:21.084423    7819 cache.go:56] Caching tarball of preloaded images
	I0520 03:35:21.084484    7819 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:35:21.084491    7819 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 03:35:21.084548    7819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/config.json ...
	I0520 03:35:21.084932    7819 start.go:360] acquireMachinesLock for stopped-upgrade-555000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:35:21.084960    7819 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "stopped-upgrade-555000"
	I0520 03:35:21.084970    7819 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:35:21.084977    7819 fix.go:54] fixHost starting: 
	I0520 03:35:21.085090    7819 fix.go:112] recreateIfNeeded on stopped-upgrade-555000: state=Stopped err=<nil>
	W0520 03:35:21.085098    7819 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:35:21.089374    7819 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-555000" ...
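fixHost found the machine in state=Stopped, so minikube restarts the existing qemu2 VM rather than recreating it. A minimal Go sketch of that decision (hypothetical State type and action helper, not minikube's actual fix.go code):

    package main

    import "fmt"

    type State int

    const (
    	Running State = iota
    	Stopped
    	Missing
    )

    // action maps an observed machine state to what recreateIfNeeded-style
    // logic would do with it.
    func action(s State) string {
    	switch s {
    	case Stopped:
    		return "restart existing VM"
    	case Missing:
    		return "create new VM"
    	default:
    		return "reuse running VM"
    	}
    }

    func main() {
    	fmt.Println(action(Stopped)) // matches "will restart" in the log above
    }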
	I0520 03:35:20.464784    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:20.464969    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:20.484457    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:20.484546    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:20.502183    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:20.502251    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:20.513597    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:20.513666    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:20.524615    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:20.524678    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:20.535080    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:20.535145    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:20.545384    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:20.545444    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:20.555677    7666 logs.go:276] 0 containers: []
	W0520 03:35:20.555691    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:20.555750    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:20.566845    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:20.566864    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:20.566869    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:20.578424    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:20.578435    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:20.589556    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:20.589567    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:20.600979    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:20.600991    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:20.613312    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:20.613323    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:20.617979    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:20.617987    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:20.653035    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:20.653049    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:20.668934    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:20.668944    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:20.693792    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:20.693804    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:20.730364    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:20.730371    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:20.742767    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:20.742778    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:20.765986    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:20.765992    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:20.779534    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:20.779546    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:20.804744    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:20.804753    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:20.818456    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:20.818469    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:20.832766    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:20.832777    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:21.097500    7819 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51268-:22,hostfwd=tcp::51269-:2376,hostname=stopped-upgrade-555000 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/disk.qcow2
	I0520 03:35:21.144629    7819 main.go:141] libmachine: STDOUT: 
	I0520 03:35:21.144665    7819 main.go:141] libmachine: STDERR: 
	I0520 03:35:21.144672    7819 main.go:141] libmachine: Waiting for VM to start (ssh -p 51268 docker@127.0.0.1)...
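The qemu-system-aarch64 invocation above uses user-mode networking with two hostfwd rules: host port 51268 forwards to the guest's SSH port 22, and host port 51269 to the Docker daemon's TLS port 2376, which is why the subsequent wait dials docker@127.0.0.1 -p 51268. A minimal Go sketch of assembling that -nic argument (hypothetical nicArg helper, not minikube's actual code):

    package main

    import "fmt"

    // nicArg builds a QEMU user-mode -nic argument that forwards the given
    // host ports to the guest's SSH (22) and Docker TLS (2376) ports.
    func nicArg(sshPort, dockerPort int, hostname string) string {
    	return fmt.Sprintf(
    		"user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376,hostname=%s",
    		sshPort, dockerPort, hostname)
    }

    func main() {
    	// Matches the invocation above: 51268 -> 22 (SSH), 51269 -> 2376 (Docker).
    	fmt.Println(nicArg(51268, 51269, "stopped-upgrade-555000"))
    }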
	I0520 03:35:23.345884    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:28.348485    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:28.348695    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:28.360837    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:28.360914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:28.371958    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:28.372033    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:28.382806    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:28.382875    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:28.393812    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:28.393879    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:28.404451    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:28.404509    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:28.415380    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:28.415453    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:28.425082    7666 logs.go:276] 0 containers: []
	W0520 03:35:28.425097    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:28.425152    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:28.435925    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:28.435942    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:28.435948    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:28.440213    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:28.440221    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:28.454887    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:28.454897    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:28.469319    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:28.469330    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:28.486890    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:28.486901    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:28.511001    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:28.511008    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:28.547600    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:28.547613    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:28.561350    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:28.561361    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:28.572443    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:28.572455    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:28.588230    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:28.588240    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:28.623830    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:28.623846    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:28.639900    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:28.639912    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:28.652262    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:28.652271    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:28.665021    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:28.665030    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:28.679711    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:28.679722    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:28.691814    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:28.691825    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:31.217546    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:36.219893    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:36.220353    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:36.260041    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:36.260179    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:36.282149    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:36.282269    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:36.297270    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:36.297344    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:36.310296    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:36.310371    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:36.321075    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:36.321145    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:36.332028    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:36.332096    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:36.342373    7666 logs.go:276] 0 containers: []
	W0520 03:35:36.342387    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:36.342441    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:36.354182    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:36.354197    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:36.354202    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:36.368484    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:36.368497    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:36.385625    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:36.385638    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:36.423222    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:36.423232    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:36.442468    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:36.442477    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:36.460060    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:36.460071    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:36.471984    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:36.471996    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:36.484270    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:36.484282    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:36.495779    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:36.495790    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:36.507962    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:36.507970    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:36.519316    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:36.519326    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:36.553650    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:36.553656    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:36.558197    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:36.558204    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:36.570207    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:36.570221    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:36.594691    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:36.594699    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:36.608597    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:36.608607    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:39.133374    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:41.067104    7819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/config.json ...
	I0520 03:35:41.067753    7819 machine.go:94] provisionDockerMachine start ...
	I0520 03:35:41.067906    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.068401    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.068415    7819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:35:41.153210    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 03:35:41.153243    7819 buildroot.go:166] provisioning hostname "stopped-upgrade-555000"
	I0520 03:35:41.153370    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.153617    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.153632    7819 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-555000 && echo "stopped-upgrade-555000" | sudo tee /etc/hostname
	I0520 03:35:41.228435    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-555000
	
	I0520 03:35:41.228517    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.228672    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.228680    7819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-555000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-555000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-555000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:35:41.294517    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
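The SSH script above makes the hostname change persistent in /etc/hosts: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. A minimal Go sketch of the same edit (hypothetical ensureHostname helper, shown for illustration only; the shell sed/tee in the log is the actual mechanism):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostname rewrites an existing 127.0.1.1 line to point at name,
    // or appends one if no such line exists.
    func ensureHostname(hosts, name string) string {
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "stopped-upgrade-555000"))
    }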
	I0520 03:35:41.294528    7819 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18925-5286/.minikube CaCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18925-5286/.minikube}
	I0520 03:35:41.294535    7819 buildroot.go:174] setting up certificates
	I0520 03:35:41.294539    7819 provision.go:84] configureAuth start
	I0520 03:35:41.294547    7819 provision.go:143] copyHostCerts
	I0520 03:35:41.294624    7819 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem, removing ...
	I0520 03:35:41.294633    7819 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem
	I0520 03:35:41.294729    7819 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem (1078 bytes)
	I0520 03:35:41.294906    7819 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem, removing ...
	I0520 03:35:41.294910    7819 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem
	I0520 03:35:41.294958    7819 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem (1123 bytes)
	I0520 03:35:41.295057    7819 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem, removing ...
	I0520 03:35:41.295061    7819 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem
	I0520 03:35:41.295104    7819 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem (1675 bytes)
	I0520 03:35:41.295184    7819 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-555000 san=[127.0.0.1 localhost minikube stopped-upgrade-555000]
	I0520 03:35:41.401606    7819 provision.go:177] copyRemoteCerts
	I0520 03:35:41.401643    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:35:41.401650    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:35:41.436524    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:35:41.443123    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 03:35:41.450353    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 03:35:41.457649    7819 provision.go:87] duration metric: took 163.108875ms to configureAuth
	I0520 03:35:41.457658    7819 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:35:41.457757    7819 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:35:41.457795    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.457880    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.457885    7819 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:35:41.520430    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:35:41.520438    7819 buildroot.go:70] root file system type: tmpfs
	I0520 03:35:41.520490    7819 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:35:41.520534    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.520640    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.520673    7819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:35:41.587266    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:35:41.587321    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.587430    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.587438    7819 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:35:41.955373    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 03:35:41.955385    7819 machine.go:97] duration metric: took 887.639125ms to provisionDockerMachine
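Note the update pattern above: the rendered unit is written to docker.service.new, diffed against the installed unit, and only when they differ is it moved into place followed by daemon-reload, enable, and restart. Here diff fails with "can't stat" because no unit was installed yet, so the new file is installed and the symlink created. A minimal Go sketch of the "install only if changed" check (hypothetical needsInstall helper, assuming the rendered unit is already available as bytes):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsInstall reports whether the on-disk unit differs from the rendered
    // one; a missing file counts as a difference, like the diff error above.
    func needsInstall(path string, rendered []byte) bool {
    	current, err := os.ReadFile(path)
    	if err != nil {
    		return true
    	}
    	return !bytes.Equal(current, rendered)
    }

    func main() {
    	fmt.Println(needsInstall("/lib/systemd/system/docker.service", []byte("[Unit]\n")))
    }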
	I0520 03:35:41.955392    7819 start.go:293] postStartSetup for "stopped-upgrade-555000" (driver="qemu2")
	I0520 03:35:41.955398    7819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:35:41.955462    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:35:41.955471    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:35:41.988503    7819 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:35:41.989727    7819 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 03:35:41.989735    7819 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18925-5286/.minikube/addons for local assets ...
	I0520 03:35:41.989822    7819 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18925-5286/.minikube/files for local assets ...
	I0520 03:35:41.989946    7819 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem -> 58182.pem in /etc/ssl/certs
	I0520 03:35:41.990079    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 03:35:41.993415    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem --> /etc/ssl/certs/58182.pem (1708 bytes)
	I0520 03:35:42.001539    7819 start.go:296] duration metric: took 46.139834ms for postStartSetup
	I0520 03:35:42.001560    7819 fix.go:56] duration metric: took 20.916973708s for fixHost
	I0520 03:35:42.001611    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:42.001743    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:42.001747    7819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 03:35:42.068295    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201341.863333879
	
	I0520 03:35:42.068304    7819 fix.go:216] guest clock: 1716201341.863333879
	I0520 03:35:42.068308    7819 fix.go:229] Guest: 2024-05-20 03:35:41.863333879 -0700 PDT Remote: 2024-05-20 03:35:42.001562 -0700 PDT m=+21.018680417 (delta=-138.228121ms)
	I0520 03:35:42.068319    7819 fix.go:200] guest clock delta is within tolerance: -138.228121ms
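The guest-clock check compares the time the guest reports over SSH (a date +%s.%N command; the %!s(MISSING) rendering earlier is a Go printf artifact in the log) against the host clock. Here the guest runs 138.228121ms behind, which is within tolerance, so no clock correction is forced. A minimal Go sketch of such a tolerance check (hypothetical withinTolerance helper; the one-second tolerance is an assumption, not minikube's actual value):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance returns the guest-host delta and whether its magnitude
    // is at most tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		return delta, -delta <= tol
    	}
    	return delta, delta <= tol
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(-138228121 * time.Nanosecond) // delta from the log
    	d, ok := withinTolerance(guest, host, time.Second)
    	fmt.Println(d, ok) // -138.228121ms true
    }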
	I0520 03:35:42.068322    7819 start.go:83] releasing machines lock for "stopped-upgrade-555000", held for 20.983748042s
	I0520 03:35:42.068387    7819 ssh_runner.go:195] Run: cat /version.json
	I0520 03:35:42.068397    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:35:42.068387    7819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:35:42.068486    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	W0520 03:35:42.068951    7819 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51268: connect: connection refused
	I0520 03:35:42.068973    7819 retry.go:31] will retry after 196.23682ms: dial tcp [::1]:51268: connect: connection refused
	W0520 03:35:42.304536    7819 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 03:35:42.304642    7819 ssh_runner.go:195] Run: systemctl --version
	I0520 03:35:42.307315    7819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 03:35:42.309771    7819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:35:42.309810    7819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 03:35:42.313577    7819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 03:35:42.319091    7819 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 03:35:42.319099    7819 start.go:494] detecting cgroup driver to use...
	I0520 03:35:42.319189    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:35:42.326818    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 03:35:42.330606    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:35:42.333920    7819 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:35:42.333950    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:35:42.337026    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:35:42.339687    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:35:42.342619    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:35:42.345817    7819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:35:42.348717    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:35:42.351569    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:35:42.354647    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 03:35:42.357967    7819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:35:42.360849    7819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:35:42.363343    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:42.440627    7819 ssh_runner.go:195] Run: sudo systemctl restart containerd
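The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false), switch runtimes to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d before the service restart. A minimal Go sketch of the SystemdCgroup rewrite (illustrative only; the shell sed in the log is the actual mechanism):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Flip any SystemdCgroup setting to false, preserving indentation,
    	// mirroring the sed expression in the log line above.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	conf := "  [plugins.cri]\n    SystemdCgroup = true\n"
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }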
	I0520 03:35:42.446615    7819 start.go:494] detecting cgroup driver to use...
	I0520 03:35:42.446678    7819 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:35:42.455762    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:35:42.460647    7819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:35:42.470243    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:35:42.474841    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:35:42.479645    7819 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 03:35:42.518157    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:35:42.523547    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:35:42.529052    7819 ssh_runner.go:195] Run: which cri-dockerd
	I0520 03:35:42.530445    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:35:42.533492    7819 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:35:42.538504    7819 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:35:42.622726    7819 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:35:42.691906    7819 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:35:42.691982    7819 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:35:42.696938    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:42.773238    7819 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:35:43.948429    7819 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.175197s)
	I0520 03:35:43.948487    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:35:43.953358    7819 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 03:35:43.960630    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:35:43.965128    7819 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:35:44.044529    7819 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:35:44.119431    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:44.199341    7819 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:35:44.205895    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:35:44.211025    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:44.291758    7819 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:35:44.335643    7819 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:35:44.335723    7819 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:35:44.338032    7819 start.go:562] Will wait 60s for crictl version
	I0520 03:35:44.338073    7819 ssh_runner.go:195] Run: which crictl
	I0520 03:35:44.340245    7819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:35:44.357073    7819 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0520 03:35:44.357147    7819 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:35:44.376488    7819 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:35:44.397725    7819 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 03:35:44.397845    7819 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 03:35:44.399322    7819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 03:35:44.403154    7819 kubeadm.go:877] updating cluster {Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 03:35:44.403212    7819 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 03:35:44.403258    7819 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:35:44.414970    7819 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:35:44.414980    7819 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 03:35:44.415022    7819 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:35:44.418736    7819 ssh_runner.go:195] Run: which lz4
	I0520 03:35:44.420198    7819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 03:35:44.421373    7819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 03:35:44.421384    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 03:35:45.146021    7819 docker.go:649] duration metric: took 725.851792ms to copy over tarball
	I0520 03:35:45.146080    7819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
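Because the stat check showed no /preloaded.tar.lz4 on the guest, the ~360 MB preload tarball is copied over SSH and unpacked into /var with lz4, seeding the Docker image store before kubeadm runs. A minimal Go sketch of the exists-before-copy check (hypothetical existsOnGuest helper taking an injected command runner in place of minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // existsOnGuest runs a stat through the supplied runner; a nil error
    // means the file is already present and the copy can be skipped.
    func existsOnGuest(runner func(cmd string) error, path string) bool {
    	return runner(fmt.Sprintf("stat %q", path)) == nil
    }

    func main() {
    	// Local shell stands in for the SSH runner in this sketch.
    	local := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
    	fmt.Println(existsOnGuest(local, "/preloaded.tar.lz4"))
    }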
	I0520 03:35:44.135847    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:44.135982    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:44.146628    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:44.146698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:44.157240    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:44.157333    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:44.168673    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:44.168739    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:44.179145    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:44.179211    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:44.189655    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:44.189723    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:44.200624    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:44.200679    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:44.211806    7666 logs.go:276] 0 containers: []
	W0520 03:35:44.211815    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:44.211858    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:44.223202    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:44.223226    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:44.223231    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:44.259492    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:44.259511    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:35:44.272030    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:44.272042    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:44.276495    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:44.276503    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:44.290833    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:44.290845    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:44.303326    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:44.303342    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:44.317855    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:44.317869    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:44.355483    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:44.355503    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:44.372952    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:44.372965    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:44.393153    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:44.393162    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:44.405929    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:44.405940    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:44.431821    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:44.431832    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:44.456249    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:44.456264    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:44.475679    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:44.475693    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:44.501635    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:44.501648    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:44.514702    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:44.514716    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:46.336944    7819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.190867541s)
	I0520 03:35:46.336957    7819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 03:35:46.352313    7819 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:35:46.355344    7819 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 03:35:46.360516    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:46.445583    7819 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:35:48.052559    7819 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.606988792s)
	I0520 03:35:48.052678    7819 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:35:48.064599    7819 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:35:48.064608    7819 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 03:35:48.064614    7819 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 03:35:48.070332    7819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.070345    7819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:48.070381    7819 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 03:35:48.070430    7819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.070490    7819 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.070513    7819 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.070491    7819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.070530    7819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.077786    7819 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 03:35:48.077911    7819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.077972    7819 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.078619    7819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.078623    7819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.078731    7819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.078778    7819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.078814    7819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:48.462707    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0520 03:35:48.469698    7819 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 03:35:48.469822    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.475973    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 03:35:48.484683    7819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 03:35:48.484712    7819 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.484766    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.485978    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.497785    7819 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 03:35:48.497804    7819 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 03:35:48.497785    7819 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 03:35:48.497857    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 03:35:48.497880    7819 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.497983    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.503719    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.509636    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 03:35:48.509674    7819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 03:35:48.509692    7819 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.509744    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.515250    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.530262    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 03:35:48.530379    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 03:35:48.531523    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 03:35:48.531600    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 03:35:48.534590    7819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 03:35:48.534602    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 03:35:48.534606    7819 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.534653    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.540711    7819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 03:35:48.540732    7819 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.540741    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 03:35:48.540760    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 03:35:48.540786    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.540732    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 03:35:48.540850    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 03:35:48.553625    7819 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 03:35:48.553643    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 03:35:48.556174    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 03:35:48.558695    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.593731    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 03:35:48.614818    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 03:35:48.614841    7819 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 03:35:48.614860    7819 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.614844    7819 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 03:35:48.614915    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.614915    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 03:35:48.660528    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 03:35:48.660565    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 03:35:48.660673    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 03:35:48.662038    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 03:35:48.662046    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 03:35:48.818879    7819 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 03:35:48.818906    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0520 03:35:48.946186    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0520 03:35:48.996135    7819 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 03:35:48.996245    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:49.007368    7819 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 03:35:49.007390    7819 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:49.007443    7819 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:49.021798    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 03:35:49.021915    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 03:35:49.023583    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 03:35:49.023607    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 03:35:49.054837    7819 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 03:35:49.054852    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 03:35:49.281179    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 03:35:49.281226    7819 cache_images.go:92] duration metric: took 1.216625375s to LoadCachedImages
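
The block above is the cached-image sync in full: `docker image inspect` compares the stored image ID against the expected hash, a mismatch triggers `docker rmi` of the stale tag, the cached tarball is copied into /var/lib/minikube/images over ssh, and `sudo cat ... | docker load` streams it into the daemon. A minimal Go sketch of that check-and-load cycle, where `runSSH` is a hypothetical stand-in for minikube's ssh_runner (it just runs commands locally here) and the hash and paths are illustrative values taken from this log:

    // cacheload.go - a sketch only; runSSH stands in for minikube's ssh_runner.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func runSSH(args ...string) (string, error) {
    	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    	return strings.TrimSpace(string(out)), err
    }

    // syncImage mirrors the inspect / rmi / transfer / load cycle traced above.
    func syncImage(image, wantID, cacheTar, remoteTar string) error {
    	// 1. Does the runtime already hold the expected image ID?
    	gotID, _ := runSSH("docker", "image", "inspect", "--format", "{{.Id}}", image)
    	if gotID == wantID {
    		return nil // already present: no transfer needed
    	}
    	// 2. Remove the stale tag before loading the cached copy.
    	runSSH("docker", "rmi", image)
    	// 3. Transfer the cached tarball (scp in the real flow).
    	if _, err := runSSH("cp", cacheTar, remoteTar); err != nil {
    		return fmt.Errorf("transfer %s: %w", cacheTar, err)
    	}
    	// 4. Stream the tarball into the daemon.
    	_, err := runSSH("/bin/bash", "-c", "sudo cat "+remoteTar+" | docker load")
    	return err
    }

    func main() {
    	// Illustrative values from the log; the hash is truncated.
    	err := syncImage("registry.k8s.io/pause:3.7", "sha256:e5a475a03805...",
    		"cache/images/arm64/registry.k8s.io/pause_3.7",
    		"/var/lib/minikube/images/pause_3.7")
    	fmt.Println("sync result:", err)
    }
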
	W0520 03:35:49.281263    7819 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0520 03:35:49.281270    7819 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 03:35:49.281323    7819 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-555000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 03:35:49.281387    7819 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 03:35:49.296102    7819 cni.go:84] Creating CNI manager for ""
	I0520 03:35:49.296115    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:35:49.296122    7819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 03:35:49.296130    7819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-555000 NodeName:stopped-upgrade-555000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 03:35:49.296194    7819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-555000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 03:35:49.296245    7819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 03:35:49.299067    7819 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 03:35:49.299101    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 03:35:49.302210    7819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 03:35:49.307259    7819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 03:35:49.312117    7819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
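
The 2096-byte kubeadm.yaml.new written here is the three-document config printed above (InitConfiguration, ClusterConfiguration and KubeletConfiguration joined by `---` separators). A sketch of assembling such a multi-document config in Go; this is not minikube's actual generator, and for illustration only the cgroup driver is parameterised:

    // kubeadmcfg.go - a sketch of building the multi-document config above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeletDoc renders only the fields that vary in this log run;
    // everything else is fixed text.
    func kubeletDoc(cgroupDriver string) string {
    	return strings.Join([]string{
    		"apiVersion: kubelet.config.k8s.io/v1beta1",
    		"kind: KubeletConfiguration",
    		"cgroupDriver: " + cgroupDriver,
    		"hairpinMode: hairpin-veth",
    		"runtimeRequestTimeout: 15m",
    		"staticPodPath: /etc/kubernetes/manifests",
    	}, "\n")
    }

    func main() {
    	// InitConfiguration and ClusterConfiguration would be built the same
    	// way; the documents are then joined with YAML document separators.
    	docs := []string{"# InitConfiguration ...", "# ClusterConfiguration ...",
    		kubeletDoc("cgroupfs")}
    	fmt.Println(strings.Join(docs, "\n---\n"))
    }
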
	I0520 03:35:49.317414    7819 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 03:35:49.318607    7819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
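
The one-liner above keeps the /etc/hosts entry idempotent: any existing control-plane.minikube.internal line is filtered out, the fresh mapping is appended to a temp file (/tmp/h.$$), and the result is copied back over /etc/hosts. The same filter-and-append step as a Go sketch, using a placeholder file path rather than the real /etc/hosts:

    // hostsfix.go - a sketch of the idempotent hosts update shown above:
    // drop any stale control-plane line, append the current one.
    package main

    import (
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	want := ip + "\t" + host
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Filter out any existing mapping for this hostname.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, want)
    	// Write to a temp file first, then swap it in - mirroring the
    	// /tmp/h.$$ + cp pattern in the log line above.
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	// "hosts.sample" is a placeholder; the real target is /etc/hosts.
    	_ = ensureHostsEntry("hosts.sample", "10.0.2.15", "control-plane.minikube.internal")
    }
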
	I0520 03:35:49.322187    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:49.400267    7819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:35:49.405464    7819 certs.go:68] Setting up /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000 for IP: 10.0.2.15
	I0520 03:35:49.405473    7819 certs.go:194] generating shared ca certs ...
	I0520 03:35:49.405482    7819 certs.go:226] acquiring lock for ca certs: {Name:mk32e3e05b22049132d2a360697fa20a693ff13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.405652    7819 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.key
	I0520 03:35:49.405705    7819 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.key
	I0520 03:35:49.405711    7819 certs.go:256] generating profile certs ...
	I0520 03:35:49.405782    7819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.key
	I0520 03:35:49.405798    7819 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f
	I0520 03:35:49.405809    7819 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 03:35:49.477219    7819 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f ...
	I0520 03:35:49.477233    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f: {Name:mkdda6f3ad96fcf46ee377b38b4e95938eea1041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.477565    7819 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f ...
	I0520 03:35:49.477574    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f: {Name:mk7ebeba82b864cfb00ad2530e5f8c957755d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.477704    7819 certs.go:381] copying /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f -> /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt
	I0520 03:35:49.477832    7819 certs.go:385] copying /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f -> /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key
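
These lines mint the apiserver serving certificate with the four IP SANs listed (the kubernetes service VIP, localhost, and the node addresses), then promote the versioned .1114808f files to apiserver.crt/.key. A hedged sketch of generating such a cert with crypto/x509; it is self-signed here for brevity, whereas the real cert is signed by the minikubeCA key:

    // apiservercert.go - a sketch of minting a serving cert with IP SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The IP SANs recorded in the crypto.go:68 line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	// Self-signed for the sketch: template doubles as parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
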
	I0520 03:35:49.477977    7819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/proxy-client.key
	I0520 03:35:49.478108    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818.pem (1338 bytes)
	W0520 03:35:49.478138    7819 certs.go:480] ignoring /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818_empty.pem, impossibly tiny 0 bytes
	I0520 03:35:49.478152    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 03:35:49.478176    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem (1078 bytes)
	I0520 03:35:49.478195    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem (1123 bytes)
	I0520 03:35:49.478216    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem (1675 bytes)
	I0520 03:35:49.478258    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem (1708 bytes)
	I0520 03:35:49.478581    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 03:35:49.485833    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 03:35:49.493103    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 03:35:49.499921    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 03:35:49.506628    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 03:35:49.514155    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 03:35:49.521465    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 03:35:49.528466    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 03:35:49.535268    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818.pem --> /usr/share/ca-certificates/5818.pem (1338 bytes)
	I0520 03:35:49.542106    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem --> /usr/share/ca-certificates/58182.pem (1708 bytes)
	I0520 03:35:49.549434    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 03:35:49.556109    7819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 03:35:49.561195    7819 ssh_runner.go:195] Run: openssl version
	I0520 03:35:49.563383    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5818.pem && ln -fs /usr/share/ca-certificates/5818.pem /etc/ssl/certs/5818.pem"
	I0520 03:35:49.566533    7819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5818.pem
	I0520 03:35:49.567969    7819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:19 /usr/share/ca-certificates/5818.pem
	I0520 03:35:49.567994    7819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5818.pem
	I0520 03:35:49.569650    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5818.pem /etc/ssl/certs/51391683.0"
	I0520 03:35:49.572508    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58182.pem && ln -fs /usr/share/ca-certificates/58182.pem /etc/ssl/certs/58182.pem"
	I0520 03:35:49.575249    7819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58182.pem
	I0520 03:35:49.576673    7819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:19 /usr/share/ca-certificates/58182.pem
	I0520 03:35:49.576691    7819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58182.pem
	I0520 03:35:49.578451    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/58182.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 03:35:49.581880    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 03:35:49.584958    7819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:35:49.586397    7819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:35:49.586418    7819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:35:49.588289    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 03:35:49.591030    7819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 03:35:49.592527    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 03:35:49.594625    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 03:35:49.596720    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 03:35:49.598783    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 03:35:49.600957    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 03:35:49.602986    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
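
`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how the six checks above decide whether any control-plane cert needs regeneration. The equivalent test in Go's crypto/x509, a sketch with a placeholder path:

    // checkend.go - a Go equivalent of
    // `openssl x509 -noout -in CERT -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside
    // the given window (86400s = 24h in the log above).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// Placeholder path; the log checks the apiserver, etcd and
    	// front-proxy client certs the same way.
    	soon, err := expiresWithin("apiserver-kubelet-client.crt", 86400*time.Second)
    	fmt.Println(soon, err)
    }
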
	I0520 03:35:49.605065    7819 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:35:49.605136    7819 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:35:49.615516    7819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 03:35:49.618854    7819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 03:35:49.618860    7819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 03:35:49.618863    7819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 03:35:49.618884    7819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 03:35:49.621830    7819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:35:49.622137    7819 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-555000" does not appear in /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:35:49.622230    7819 kubeconfig.go:62] /Users/jenkins/minikube-integration/18925-5286/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-555000" cluster setting kubeconfig missing "stopped-upgrade-555000" context setting]
	I0520 03:35:49.622417    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.622845    7819 kapi.go:59] client config for stopped-upgrade-555000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ea0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:35:49.623271    7819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 03:35:49.625973    7819 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-555000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
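
Drift detection here is just `diff -u` between the live kubeadm.yaml and the freshly rendered kubeadm.yaml.new: exit status 1 means the files differ (in this run the CRI socket gained its unix:// scheme and the cgroup driver changed from systemd to cgroupfs), and any difference routes into the reconfigure path. A sketch of that decision, treating diff's exit code as the signal:

    // drift.go - a sketch of the kubeadm config-drift check above:
    // diff exits 0 (identical), 1 (differs) or >1 (error).
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical: no reconfigure needed
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ: reconfigure
    	}
    	return false, "", err // diff itself failed (e.g. missing file)
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff error:", err)
    		return
    	}
    	if drifted {
    		fmt.Println("detected kubeadm config drift:\n" + diff)
    	}
    }
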
	I0520 03:35:49.625978    7819 kubeadm.go:1154] stopping kube-system containers ...
	I0520 03:35:49.626016    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:35:49.636733    7819 docker.go:483] Stopping containers: [6474d3cde87b 27d8c3bf7a0b d8573923fa37 30a7c27597f7 e20d655db97e 137ba8a1eae4 642180187ce6 7ba00f49f8cf]
	I0520 03:35:49.636800    7819 ssh_runner.go:195] Run: docker stop 6474d3cde87b 27d8c3bf7a0b d8573923fa37 30a7c27597f7 e20d655db97e 137ba8a1eae4 642180187ce6 7ba00f49f8cf
	I0520 03:35:49.647100    7819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 03:35:49.652906    7819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:35:49.655586    7819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:35:49.655592    7819 kubeadm.go:156] found existing configuration files:
	
	I0520 03:35:49.655616    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0520 03:35:49.658340    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:35:49.658365    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:35:49.661447    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0520 03:35:49.664049    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:35:49.664071    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:35:49.666571    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0520 03:35:49.669545    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:35:49.669565    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:35:49.672089    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0520 03:35:49.674563    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:35:49.674588    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
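
The four grep/rm pairs above are a single cleanup loop: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A compact sketch of that loop, with the endpoint value taken from this run:

    // staleconf.go - a sketch of the stale-kubeconfig cleanup traced above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func cleanStaleConfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // conf already targets the right endpoint: keep it
    		}
    		// Missing or pointing elsewhere: delete so kubeadm rewrites it.
    		os.Remove(p)
    		fmt.Printf("removed stale %s\n", p)
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:51302", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
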
	I0520 03:35:49.677584    7819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:35:49.680482    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:49.701467    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.281461    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.411335    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.432897    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.453414    7819 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:35:50.453499    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:35:50.955567    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:35:47.030448    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:51.455427    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:35:51.459563    7819 api_server.go:72] duration metric: took 1.006167792s to wait for apiserver process to appear ...
	I0520 03:35:51.459572    7819 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:35:51.459582    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
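
From here both processes (7819 and 7666, whose output interleaves below) poll https://10.0.2.15:8443/healthz with a short per-request timeout; every "stopped:" line is one timed-out probe, and polling continues until the overall deadline. A sketch of such a wait loop; the InsecureSkipVerify is an assumption made for brevity, whereas the real client is configured with the cluster CA:

    // healthz.go - a sketch of the apiserver healthz wait loop above:
    // poll until the endpoint answers "ok" or the deadline expires.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-probe timeout, as in the log
    		Transport: &http.Transport{
    			// Assumption for the sketch only; do not skip verification
    			// in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // apiserver is healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz never became ready at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
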
	I0520 03:35:52.032951    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:52.033061    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:52.047851    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:52.047929    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:52.059330    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:52.059409    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:52.070145    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:52.070219    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:35:52.081652    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:35:52.081718    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:35:52.092279    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:35:52.092346    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:35:52.102873    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:35:52.102934    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:35:52.113094    7666 logs.go:276] 0 containers: []
	W0520 03:35:52.113104    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:35:52.113163    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:35:52.124261    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:35:52.124278    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:35:52.124284    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:35:52.160082    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:35:52.160093    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:35:52.175913    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:35:52.175922    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:35:52.198932    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:35:52.198944    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:35:52.210818    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:35:52.210829    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:35:52.215258    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:35:52.215266    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:35:52.229684    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:35:52.229695    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:35:52.254660    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:35:52.254670    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:35:52.272675    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:35:52.272691    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:35:52.310380    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:35:52.310391    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:35:52.324794    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:35:52.324805    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:35:52.343073    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:35:52.343083    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:35:52.356631    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:35:52.356641    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:35:52.373313    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:35:52.373324    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:35:52.384527    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:35:52.384538    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:35:52.397343    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:35:52.397355    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
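
Each "Gathering logs for X ..." pair above maps a component to either a journalctl unit (kubelet, Docker) or a `docker logs --tail 400` call on the container IDs discovered by the preceding `docker ps -a` filters. A sketch of that diagnostics fan-out, with container IDs copied from this run as example input:

    // gather.go - a sketch of the log-gathering fan-out traced above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(cmd string) string {
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out)
    }

    func gather(containers map[string][]string) {
    	// Host-level sources first.
    	fmt.Println(run("sudo journalctl -u kubelet -n 400"))
    	fmt.Println(run("sudo journalctl -u docker -u cri-docker -n 400"))
    	// Then the last 400 lines of every control-plane container,
    	// including exited ones (the ps calls above used -a).
    	for name, ids := range containers {
    		for _, id := range ids {
    			fmt.Printf("== %s [%s] ==\n%s", name, id, run("docker logs --tail 400 "+id))
    		}
    	}
    }

    func main() {
    	gather(map[string][]string{
    		"kube-apiserver": {"ce87b437ad41", "e272c0fad9b3"}, // IDs from this log
    		"etcd":           {"b180f9ae6308", "50dc532e6232"},
    		"coredns":        {"31c80af70857"},
    	})
    }
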
	I0520 03:35:54.911528    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:56.460346    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:56.460395    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:59.914210    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:59.914608    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:35:59.951176    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:35:59.951355    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:35:59.973046    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:35:59.973146    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:35:59.988229    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:35:59.988306    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:00.000326    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:36:00.000403    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:00.010699    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:36:00.010764    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:00.021709    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:36:00.021777    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:00.033977    7666 logs.go:276] 0 containers: []
	W0520 03:36:00.033992    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:00.034057    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:00.044533    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:36:00.044549    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:36:00.044555    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:36:00.056623    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:36:00.056634    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:36:00.067571    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:36:00.067580    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:36:00.090814    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:36:00.090826    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:36:00.104597    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:36:00.104609    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:36:00.116239    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:36:00.116251    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:36:00.128111    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:36:00.128122    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:36:00.145869    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:36:00.145880    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:36:00.157413    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:00.157423    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:00.180787    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:36:00.180798    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:00.193884    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:00.193896    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:00.229276    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:00.229285    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:00.264458    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:36:00.264470    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:36:00.282220    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:36:00.282231    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:36:00.303355    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:00.303365    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:00.307442    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:36:00.307449    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:36:01.461455    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:01.461484    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:02.824950    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:06.461616    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:06.461704    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:07.827130    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:07.827318    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:36:07.838364    7666 logs.go:276] 2 containers: [ce87b437ad41 e272c0fad9b3]
	I0520 03:36:07.838431    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:36:07.850044    7666 logs.go:276] 2 containers: [b180f9ae6308 50dc532e6232]
	I0520 03:36:07.850122    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:36:07.866625    7666 logs.go:276] 1 containers: [31c80af70857]
	I0520 03:36:07.866698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:07.877685    7666 logs.go:276] 2 containers: [68dd728d5462 cab878002565]
	I0520 03:36:07.877775    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:07.888594    7666 logs.go:276] 1 containers: [96765dfda96d]
	I0520 03:36:07.888670    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:07.900769    7666 logs.go:276] 2 containers: [dfc1234f899d dc341bfdb38e]
	I0520 03:36:07.900844    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:07.910948    7666 logs.go:276] 0 containers: []
	W0520 03:36:07.910959    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:07.911019    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:07.922006    7666 logs.go:276] 1 containers: [a0a63a14b225]
	I0520 03:36:07.922023    7666 logs.go:123] Gathering logs for kube-apiserver [e272c0fad9b3] ...
	I0520 03:36:07.922040    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e272c0fad9b3"
	I0520 03:36:07.955022    7666 logs.go:123] Gathering logs for etcd [b180f9ae6308] ...
	I0520 03:36:07.955037    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b180f9ae6308"
	I0520 03:36:07.969481    7666 logs.go:123] Gathering logs for coredns [31c80af70857] ...
	I0520 03:36:07.969495    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31c80af70857"
	I0520 03:36:07.981287    7666 logs.go:123] Gathering logs for kube-scheduler [cab878002565] ...
	I0520 03:36:07.981299    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab878002565"
	I0520 03:36:07.998463    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:07.998476    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:08.023364    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:08.023379    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:08.059246    7666 logs.go:123] Gathering logs for kube-apiserver [ce87b437ad41] ...
	I0520 03:36:08.059260    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce87b437ad41"
	I0520 03:36:08.074184    7666 logs.go:123] Gathering logs for kube-controller-manager [dfc1234f899d] ...
	I0520 03:36:08.074195    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfc1234f899d"
	I0520 03:36:08.093741    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:36:08.093753    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:08.105482    7666 logs.go:123] Gathering logs for kube-scheduler [68dd728d5462] ...
	I0520 03:36:08.105500    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68dd728d5462"
	I0520 03:36:08.118115    7666 logs.go:123] Gathering logs for kube-controller-manager [dc341bfdb38e] ...
	I0520 03:36:08.118128    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc341bfdb38e"
	I0520 03:36:08.129815    7666 logs.go:123] Gathering logs for storage-provisioner [a0a63a14b225] ...
	I0520 03:36:08.129827    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0a63a14b225"
	I0520 03:36:08.141582    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:08.141596    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:08.178278    7666 logs.go:123] Gathering logs for etcd [50dc532e6232] ...
	I0520 03:36:08.178290    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dc532e6232"
	I0520 03:36:08.194241    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:08.194252    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:08.198908    7666 logs.go:123] Gathering logs for kube-proxy [96765dfda96d] ...
	I0520 03:36:08.198916    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96765dfda96d"
	I0520 03:36:10.715071    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:11.462133    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:11.462181    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:15.717311    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:15.717381    7666 kubeadm.go:591] duration metric: took 4m3.734372042s to restartPrimaryControlPlane
	W0520 03:36:15.717451    7666 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 03:36:15.717481    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 03:36:16.704955    7666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 03:36:16.709871    7666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:36:16.712924    7666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:36:16.716141    7666 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:36:16.716148    7666 kubeadm.go:156] found existing configuration files:
	
	I0520 03:36:16.716204    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/admin.conf
	I0520 03:36:16.718651    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:36:16.718672    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:36:16.721697    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/kubelet.conf
	I0520 03:36:16.724859    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:36:16.724882    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:36:16.727466    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/controller-manager.conf
	I0520 03:36:16.730119    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:36:16.730137    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:36:16.733452    7666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/scheduler.conf
	I0520 03:36:16.736426    7666 kubeadm.go:162] "https://control-plane.minikube.internal:51080" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51080 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:36:16.736449    7666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
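For reference, the cleanup above follows one pattern per kubeconfig: grep the file for the expected control-plane endpoint and delete it when the check fails (a missing file fails the grep too, which is why the rm is harmless here). A minimal Go sketch of that pattern, assuming a local shell with grep and sudo; the run helper is hypothetical and stands in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out locally; in minikube the equivalent goes over SSH to the node.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:51080" // from the log above
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := run("sudo", "grep", endpoint, c); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, c)
			_ = run("sudo", "rm", "-f", c) // mirrors the rm -f calls above
		}
	}
}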
	I0520 03:36:16.739114    7666 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 03:36:16.755945    7666 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 03:36:16.755973    7666 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 03:36:16.803753    7666 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 03:36:16.803804    7666 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 03:36:16.803851    7666 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 03:36:16.852069    7666 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 03:36:16.860336    7666 out.go:204]   - Generating certificates and keys ...
	I0520 03:36:16.860369    7666 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 03:36:16.860410    7666 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 03:36:16.860447    7666 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 03:36:16.860478    7666 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 03:36:16.860509    7666 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 03:36:16.860552    7666 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 03:36:16.860587    7666 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 03:36:16.860620    7666 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 03:36:16.860657    7666 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 03:36:16.860696    7666 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 03:36:16.860718    7666 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 03:36:16.860745    7666 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 03:36:16.925027    7666 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 03:36:16.977093    7666 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 03:36:17.381800    7666 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 03:36:17.443869    7666 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 03:36:17.473911    7666 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 03:36:17.475005    7666 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 03:36:17.475076    7666 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 03:36:17.561370    7666 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 03:36:16.462939    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:16.462965    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:17.565548    7666 out.go:204]   - Booting up control plane ...
	I0520 03:36:17.565595    7666 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 03:36:17.565644    7666 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 03:36:17.565693    7666 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 03:36:17.565778    7666 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 03:36:17.565860    7666 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 03:36:22.066282    7666 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501677 seconds
	I0520 03:36:22.066343    7666 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 03:36:22.070671    7666 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 03:36:22.578963    7666 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 03:36:22.579087    7666 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-908000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 03:36:23.082344    7666 kubeadm.go:309] [bootstrap-token] Using token: mz6kye.uxn8tzbb1tomjbed
	I0520 03:36:23.087228    7666 out.go:204]   - Configuring RBAC rules ...
	I0520 03:36:23.087298    7666 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 03:36:23.087346    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 03:36:23.088989    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 03:36:23.093744    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0520 03:36:23.094840    7666 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 03:36:23.095799    7666 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 03:36:23.098997    7666 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 03:36:23.280309    7666 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 03:36:23.487545    7666 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 03:36:23.487987    7666 kubeadm.go:309] 
	I0520 03:36:23.488015    7666 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 03:36:23.488024    7666 kubeadm.go:309] 
	I0520 03:36:23.488062    7666 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 03:36:23.488065    7666 kubeadm.go:309] 
	I0520 03:36:23.488081    7666 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 03:36:23.488112    7666 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 03:36:23.488154    7666 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 03:36:23.488162    7666 kubeadm.go:309] 
	I0520 03:36:23.488200    7666 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 03:36:23.488202    7666 kubeadm.go:309] 
	I0520 03:36:23.488224    7666 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 03:36:23.488228    7666 kubeadm.go:309] 
	I0520 03:36:23.488254    7666 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 03:36:23.488293    7666 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 03:36:23.488343    7666 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 03:36:23.488348    7666 kubeadm.go:309] 
	I0520 03:36:23.488396    7666 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 03:36:23.488442    7666 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 03:36:23.488445    7666 kubeadm.go:309] 
	I0520 03:36:23.488505    7666 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token mz6kye.uxn8tzbb1tomjbed \
	I0520 03:36:23.488559    7666 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 \
	I0520 03:36:23.488569    7666 kubeadm.go:309] 	--control-plane 
	I0520 03:36:23.488571    7666 kubeadm.go:309] 
	I0520 03:36:23.488609    7666 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 03:36:23.488613    7666 kubeadm.go:309] 
	I0520 03:36:23.488648    7666 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token mz6kye.uxn8tzbb1tomjbed \
	I0520 03:36:23.488701    7666 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 
	I0520 03:36:23.488758    7666 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
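A note on the join command printed above: the --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so a joining node can recompute it from ca.crt and verify it is bootstrapping against the intended cluster. A small Go sketch under that assumption (the certificate path matches the "[certs] Using certificateDir" line earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA path taken from the certificateDir used above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}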
	I0520 03:36:23.488845    7666 cni.go:84] Creating CNI manager for ""
	I0520 03:36:23.488855    7666 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:36:23.497108    7666 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 03:36:23.501267    7666 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 03:36:23.505118    7666 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
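The 496-byte conflist written above is minikube's bridge CNI configuration. A representative bridge conflist, written the same way, is sketched below; the field values are illustrative and not the exact bytes minikube generated:

package main

import "os"

// Illustrative bridge CNI config: a bridge plugin with host-local IPAM plus a
// portmap plugin. The subnet is an assumption, not read from this cluster.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Mirrors the mkdir -p and scp steps in the log above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}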
	I0520 03:36:23.510053    7666 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 03:36:23.510109    7666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:36:23.510118    7666 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-908000 minikube.k8s.io/updated_at=2024_05_20T03_36_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=running-upgrade-908000 minikube.k8s.io/primary=true
	I0520 03:36:23.551313    7666 kubeadm.go:1107] duration metric: took 41.249417ms to wait for elevateKubeSystemPrivileges
	I0520 03:36:23.551319    7666 ops.go:34] apiserver oom_adj: -16
	W0520 03:36:23.551420    7666 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 03:36:23.551425    7666 kubeadm.go:393] duration metric: took 4m11.582283292s to StartCluster
	I0520 03:36:23.551434    7666 settings.go:142] acquiring lock: {Name:mkc3af27fbea4a81f456d1d023b17ad3b4bc78ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:36:23.551609    7666 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:36:23.551989    7666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:36:23.552211    7666 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:36:23.554203    7666 out.go:177] * Verifying Kubernetes components...
	I0520 03:36:23.552222    7666 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 03:36:23.552283    7666 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:36:23.562318    7666 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-908000"
	I0520 03:36:23.562318    7666 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-908000"
	I0520 03:36:23.562337    7666 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-908000"
	W0520 03:36:23.562342    7666 addons.go:243] addon storage-provisioner should already be in state true
	I0520 03:36:23.562353    7666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-908000"
	I0520 03:36:23.562330    7666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:36:23.562364    7666 host.go:66] Checking if "running-upgrade-908000" exists ...
	I0520 03:36:23.563361    7666 kapi.go:59] client config for running-upgrade-908000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/running-upgrade-908000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1026fc580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:36:23.563484    7666 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-908000"
	W0520 03:36:23.563488    7666 addons.go:243] addon default-storageclass should already be in state true
	I0520 03:36:23.563495    7666 host.go:66] Checking if "running-upgrade-908000" exists ...
	I0520 03:36:23.568223    7666 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:36:21.463547    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:21.463611    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:23.571280    7666 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:36:23.571286    7666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 03:36:23.571292    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:36:23.571850    7666 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 03:36:23.571856    7666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 03:36:23.571860    7666 sshutil.go:53] new ssh client: &{IP:localhost Port:51048 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/running-upgrade-908000/id_rsa Username:docker}
	I0520 03:36:23.657095    7666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:36:23.661827    7666 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:36:23.661871    7666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:36:23.666060    7666 api_server.go:72] duration metric: took 113.841833ms to wait for apiserver process to appear ...
	I0520 03:36:23.666068    7666 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:36:23.666074    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:23.729172    7666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:36:23.729794    7666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 03:36:26.464607    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:26.464666    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:28.667655    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:28.667741    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:31.465980    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:31.466021    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:33.668282    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:33.668303    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:36.467499    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:36.467562    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:38.668506    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:38.668534    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:41.467893    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:41.467935    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:43.669288    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:43.669328    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:46.470129    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:46.470209    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:48.669909    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:48.669934    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:53.670650    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:53.670683    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
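The repeated healthz pairs above ("Checking apiserver healthz ..." followed by "stopped: ... Client.Timeout exceeded") come from a poll loop: an HTTPS GET against /healthz with a short per-request timeout, retried until an overall deadline. A minimal sketch of that loop, with the ~5 s interval inferred from the log timestamps and TLS verification skipped only because this sketch loads no cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://10.0.2.15:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred from the ~5 s spacing of attempts
		Transport: &http.Transport{
			// A real client would load the cluster CA instead of skipping checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "Will wait 6m0s" above
	for time.Now().Before(deadline) {
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			time.Sleep(time.Second)      // avoid a hot loop on fast failures
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
	fmt.Println("gave up waiting for the apiserver")
}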
	W0520 03:36:54.079027    7666 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 03:36:54.083355    7666 out.go:177] * Enabled addons: storage-provisioner
	I0520 03:36:51.472648    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:51.472836    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:36:51.486716    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:36:51.486801    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:36:51.497884    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:36:51.497967    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:36:51.508229    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:36:51.508302    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:51.519625    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:36:51.519709    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:51.530265    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:36:51.530331    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:51.540392    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:36:51.540456    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:51.550397    7819 logs.go:276] 0 containers: []
	W0520 03:36:51.550407    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:51.550465    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:51.560771    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:36:51.560788    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:36:51.560794    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:36:51.581009    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:36:51.581024    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:36:51.592701    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:36:51.592711    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:36:51.604036    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:51.604057    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:51.709058    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:36:51.709070    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:36:51.731223    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:36:51.731236    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:36:51.747295    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:36:51.747308    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:36:51.759377    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:36:51.759391    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:36:51.773682    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:51.773696    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:51.799445    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:51.799453    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:51.837704    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:36:51.837714    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:36:51.851418    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:36:51.851427    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:36:51.877777    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:36:51.877787    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:36:51.889059    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:36:51.889068    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:51.901337    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:51.901348    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:51.905238    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:36:51.905246    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:36:51.921509    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:36:51.921519    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
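Each diagnostic pass above is the same loop: enumerate containers per control-plane component with a docker ps name filter, then tail each container's logs. A compact Go sketch of that loop, assuming a local docker CLI (the component names and the 400-line tail are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, ": listing failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}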
	I0520 03:36:54.441282    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:54.091247    7666 addons.go:505] duration metric: took 30.539599042s for enable addons: enabled=[storage-provisioner]
	I0520 03:36:59.443106    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:59.443348    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:36:59.462133    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:36:59.462224    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:36:59.480143    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:36:59.480211    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:36:59.494248    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:36:59.494323    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:59.504977    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:36:59.505043    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:59.514914    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:36:59.514977    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:59.525989    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:36:59.526064    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:59.536674    7819 logs.go:276] 0 containers: []
	W0520 03:36:59.536685    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:59.536736    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:59.548556    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:36:59.548577    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:36:59.548583    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:36:59.562805    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:36:59.562817    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:36:59.580469    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:36:59.580481    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:59.592443    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:59.592455    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:59.630359    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:36:59.630369    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:36:59.641305    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:36:59.641315    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:36:59.656256    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:59.656266    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:59.660708    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:36:59.660714    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:36:59.672815    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:59.672825    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:59.711851    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:36:59.711862    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:36:59.737076    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:36:59.737087    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:36:59.752047    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:36:59.752059    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:36:59.770023    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:36:59.770033    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:36:59.781448    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:59.781459    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:59.805837    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:36:59.805846    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:36:59.819745    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:36:59.819756    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:36:59.833718    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:36:59.833727    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:36:58.671623    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:58.671663    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:02.349780    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:03.672974    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:03.673014    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:07.352134    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:07.352361    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:07.376061    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:07.376161    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:07.392463    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:07.392542    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:07.404798    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:07.404865    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:07.419451    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:07.419520    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:07.430943    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:07.431016    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:07.449093    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:07.449162    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:07.459177    7819 logs.go:276] 0 containers: []
	W0520 03:37:07.459187    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:07.459244    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:07.472545    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:07.472565    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:07.472570    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:07.497039    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:07.497049    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:07.533198    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:07.533207    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:07.570036    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:07.570046    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:07.585827    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:07.585837    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:07.599258    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:07.599269    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:07.613774    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:07.613783    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:07.627544    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:07.627554    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:07.638978    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:07.638992    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:07.652716    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:07.652724    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:07.677737    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:07.677748    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:07.702374    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:07.702385    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:07.716623    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:07.716632    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:07.721226    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:07.721235    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:07.736874    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:07.736889    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:07.748517    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:07.748527    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:07.759318    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:07.759329    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:10.273848    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:08.674562    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:08.674588    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:15.276145    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:15.276498    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:15.308652    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:15.308784    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:15.330230    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:15.330317    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:15.343267    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:15.343334    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:15.355409    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:15.355484    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:15.366939    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:15.367009    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:15.377739    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:15.377807    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:15.388799    7819 logs.go:276] 0 containers: []
	W0520 03:37:15.388812    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:15.388868    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:15.399284    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:15.399301    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:15.399306    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:15.411269    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:15.411279    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:15.422492    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:15.422502    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:15.436245    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:15.436254    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:15.449348    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:15.449359    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:15.466723    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:15.466734    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:15.484258    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:15.484272    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:15.510224    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:15.510235    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:15.544832    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:15.544843    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:15.559265    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:15.559276    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:15.573716    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:15.573727    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:15.611676    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:15.611683    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:15.625408    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:15.625419    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:15.638119    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:15.638128    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:15.649576    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:15.649588    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:15.653832    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:15.653840    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:15.684669    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:15.684679    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:13.676678    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:13.676708    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:18.201536    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:18.678797    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:18.678836    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:23.203870    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:23.204284    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:23.240794    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:23.240925    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:23.260440    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:23.260540    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:23.276462    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:23.276540    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:23.288712    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:23.288779    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:23.299075    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:23.299143    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:23.310894    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:23.310960    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:23.326132    7819 logs.go:276] 0 containers: []
	W0520 03:37:23.326143    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:23.326201    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:23.336974    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:23.336991    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:23.336996    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:23.375948    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:23.375961    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:23.391006    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:23.391016    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:23.403083    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:23.403094    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:23.427761    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:23.427772    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:23.431782    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:23.431791    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:23.443007    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:23.443017    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:23.458388    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:23.458399    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:23.477296    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:23.477309    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:23.488878    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:23.488890    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:23.501373    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:23.501385    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:23.538942    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:23.538952    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:23.564866    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:23.564880    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:23.578657    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:23.578670    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:23.593233    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:23.593244    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:23.607765    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:23.607776    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:23.619365    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:23.619374    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:23.680941    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:23.681033    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:23.705969    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:23.706041    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:23.723693    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:23.723763    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:23.736318    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:23.736394    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:23.746979    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:23.747053    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:23.759898    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:23.759969    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:23.770618    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:23.770682    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:23.780711    7666 logs.go:276] 0 containers: []
	W0520 03:37:23.780721    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:23.780780    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:23.793030    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:23.793046    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:23.793052    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:23.829304    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:23.829317    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:23.902310    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:23.902326    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:23.916863    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:23.916874    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:23.928605    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:23.928618    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:23.946096    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:23.946110    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:23.951079    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:23.951086    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:23.965964    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:23.965973    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:23.977564    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:23.977579    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:23.997452    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:23.997463    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:24.009384    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:24.009395    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:24.020945    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:24.020957    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:24.045951    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:24.045961    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:26.559436    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:26.133992    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:31.560199    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:31.560331    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:31.579430    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:31.579492    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:31.590235    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:31.590300    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:31.600550    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:31.600613    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:31.611272    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:31.611333    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:31.624221    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:31.624283    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:31.634485    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:31.634555    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:31.644838    7666 logs.go:276] 0 containers: []
	W0520 03:37:31.644850    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:31.644901    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:31.655156    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:31.655175    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:31.655179    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:31.666867    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:31.666878    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:31.685308    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:31.685318    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:31.710358    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:31.710365    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:31.728147    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:31.728157    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:31.739787    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:31.739796    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:31.755423    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:31.755432    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:31.770752    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:31.770762    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:31.782392    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:31.782402    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:31.793671    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:31.793681    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:31.136310    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:31.136558    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:31.164609    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:31.164753    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:31.185244    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:31.185351    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:31.198353    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:31.198440    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:31.210117    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:31.210198    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:31.220723    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:31.220802    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:31.231214    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:31.231285    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:31.241548    7819 logs.go:276] 0 containers: []
	W0520 03:37:31.241568    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:31.241638    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:31.252319    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:31.252338    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:31.252343    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:31.267034    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:31.267044    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:31.282224    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:31.282235    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:31.297511    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:31.297521    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:31.309370    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:31.309380    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:31.321713    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:31.321723    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:31.346239    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:31.346251    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:31.358543    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:31.358553    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:31.374848    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:31.374858    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:31.398409    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:31.398415    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:31.431430    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:31.431445    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:31.445056    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:31.445067    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:31.456588    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:31.456600    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:31.469388    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:31.469401    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:31.483058    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:31.483067    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:31.486994    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:31.487001    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:31.504492    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:31.504501    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
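	Between sweeps, both processes (7666 and 7819) probe the apiserver's /healthz endpoint and report `stopped: ... context deadline exceeded` roughly five seconds later. A sketch of that probe pattern follows, assuming a plain HTTPS GET with a 5 s client timeout inferred from the gaps in the log, and with InsecureSkipVerify standing in for the cluster-CA handling minikube actually performs.

```go
// Minimal sketch of the api_server.go healthz probe pattern, under the
// assumptions stated above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred from the ~5 s Checking->stopped gap
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %w", err) // e.g. Client.Timeout exceeded
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```

	The "Client.Timeout exceeded while awaiting headers" text in the log is what net/http reports when the client timeout fires before any response headers arrive.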
	I0520 03:37:34.045299    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:31.830446    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:31.830468    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:31.835747    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:31.835765    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:31.874636    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:31.874647    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:34.401618    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:39.047528    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:39.047692    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:39.062200    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:39.062278    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:39.074009    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:39.074080    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:39.084469    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:39.084541    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:39.095315    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:39.095388    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:39.110561    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:39.110629    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:39.121069    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:39.121139    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:39.131670    7819 logs.go:276] 0 containers: []
	W0520 03:37:39.131691    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:39.131786    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:39.142674    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:39.142690    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:39.142696    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:39.146625    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:39.146630    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:39.158169    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:39.158178    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:39.176154    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:39.176163    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:39.189985    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:39.189998    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:39.203909    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:39.203918    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:39.219028    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:39.219037    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:39.232948    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:39.232957    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:39.247858    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:39.247867    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:39.284425    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:39.284432    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:39.320656    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:39.320664    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:39.332373    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:39.332385    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:39.344180    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:39.344190    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:39.369157    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:39.369167    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:39.394307    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:39.394318    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:39.405756    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:39.405764    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:39.430950    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:39.430960    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:39.403803    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:39.403888    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:39.415363    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:39.415430    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:39.425845    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:39.425916    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:39.443663    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:39.443729    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:39.455083    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:39.455154    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:39.466385    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:39.466459    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:39.477178    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:39.477250    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:39.488839    7666 logs.go:276] 0 containers: []
	W0520 03:37:39.488854    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:39.488914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:39.500085    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:39.500100    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:39.500105    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:39.511803    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:39.511813    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:39.523409    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:39.523422    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:39.538495    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:39.538504    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:39.559159    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:39.559168    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:39.582349    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:39.582362    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:39.606105    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:39.606120    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:39.643979    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:39.643994    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:39.659206    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:39.659220    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:39.673643    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:39.673656    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:39.685531    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:39.685546    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:39.697342    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:39.697353    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:39.732925    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:39.732932    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
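	Each round begins by enumerating candidate containers, one `docker ps -a` per component, filtered on the `k8s_<component>` name prefix that dockershim/cri-dockerd gives pod containers. Zero IDs yields the "No container was found matching" warning; two IDs (as for 7819's apiserver and etcd) usually mean a restarted container alongside its exited predecessor. A short sketch under those assumptions:

```go
// Sketch of the per-component container discovery seen in the log.
// The docker command line is copied verbatim; the component list and
// parsing are assumptions based on the entries above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```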
	I0520 03:37:41.958387    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:42.239134    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:46.959524    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:46.959767    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:46.977025    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:46.977104    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:46.990995    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:46.991069    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:47.004804    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:47.004876    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:47.015019    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:47.015088    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:47.026239    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:47.026310    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:47.037103    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:47.037167    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:47.047219    7819 logs.go:276] 0 containers: []
	W0520 03:37:47.047233    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:47.047288    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:47.057657    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:47.057676    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:47.057681    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:47.068816    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:47.068826    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:47.080187    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:47.080197    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:47.097777    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:47.097791    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:47.113045    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:47.113056    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:47.127324    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:47.127333    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:47.141463    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:47.141473    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:47.164376    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:47.164384    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:47.168552    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:47.168558    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:47.202838    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:47.202853    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:47.216905    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:47.216915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:47.241838    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:47.241844    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:47.256596    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:47.256609    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:47.269674    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:47.269690    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:47.310016    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:47.310027    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:47.326628    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:47.326646    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:47.339340    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:47.339352    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:49.852870    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:47.241298    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:47.241388    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:47.252431    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:47.252505    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:47.263825    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:47.263897    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:47.275726    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:47.275801    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:47.287278    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:47.287353    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:47.297641    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:47.297714    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:47.308327    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:47.308403    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:47.319659    7666 logs.go:276] 0 containers: []
	W0520 03:37:47.319670    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:47.319735    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:47.331125    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:47.331145    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:47.331153    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:47.336015    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:47.336029    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:47.384137    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:47.384148    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:47.398600    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:47.398611    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:47.412593    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:47.412604    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:47.429584    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:47.429595    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:47.445046    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:47.445057    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:47.469333    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:47.469349    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:47.505692    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:47.505702    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:47.523929    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:47.523939    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:47.535743    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:47.535755    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:47.551488    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:47.551499    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:47.562940    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:47.562950    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
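	The "describe nodes" step in these rounds invokes the kubectl binary minikube pinned for the cluster's Kubernetes version (v1.24.1 here), pointed at the in-guest kubeconfig. A sketch of that single step, with paths taken from the log; running it locally is an assumption, since the real call goes over SSH:

```go
// Hypothetical local stand-in for the "describe nodes" collection step.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	).CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```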
	I0520 03:37:50.076304    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:54.854555    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:54.854664    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:54.866313    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:54.866389    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:54.879739    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:54.879817    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:54.890099    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:54.890161    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:54.900291    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:54.900358    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:54.910695    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:54.910763    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:54.921375    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:54.921435    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:54.931362    7819 logs.go:276] 0 containers: []
	W0520 03:37:54.931372    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:54.931430    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:54.941716    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:54.941734    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:54.941740    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:54.966968    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:54.966979    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:54.981385    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:54.981394    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:54.993049    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:54.993059    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:55.030521    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:55.030536    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:55.041980    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:55.041992    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:55.055908    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:55.055918    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:55.079855    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:55.079862    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:55.116904    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:55.116916    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:55.132179    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:55.132189    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:55.145271    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:55.145282    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:55.161871    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:55.161883    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:55.174745    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:55.174757    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:55.193725    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:55.193736    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:55.206245    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:55.206258    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:55.211233    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:55.211241    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:55.223672    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:55.223681    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:55.078427    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:55.078530    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:55.089980    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:37:55.090057    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:55.107769    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:37:55.107849    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:55.119292    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:37:55.119364    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:55.131106    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:37:55.131178    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:55.142023    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:37:55.142103    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:55.153815    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:37:55.153892    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:55.165846    7666 logs.go:276] 0 containers: []
	W0520 03:37:55.165858    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:55.165923    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:55.176858    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:37:55.176874    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:55.176879    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:55.181814    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:37:55.181826    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:37:55.196624    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:37:55.196632    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:37:55.209995    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:37:55.210006    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:37:55.222748    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:37:55.222761    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:37:55.247362    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:37:55.247372    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:37:55.259513    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:37:55.259527    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:55.277678    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:55.277689    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:55.313450    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:55.313459    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:55.349535    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:37:55.349546    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:37:55.365943    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:37:55.365953    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:37:55.377650    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:37:55.377661    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:37:55.393021    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:55.393031    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:57.740100    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:57.919475    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:02.742564    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:02.742838    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:02.762771    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:02.762871    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:02.777566    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:02.777651    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:02.789114    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:02.789188    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:02.799806    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:02.799883    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:02.810723    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:02.810798    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:02.821523    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:02.821601    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:02.832150    7819 logs.go:276] 0 containers: []
	W0520 03:38:02.832164    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:02.832225    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:02.842550    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:02.842567    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:02.842573    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:02.853706    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:02.853716    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:02.871245    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:02.871258    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:02.895258    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:02.895265    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:02.899443    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:02.899452    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:02.911139    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:02.911149    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:02.950523    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:02.950543    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:02.986834    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:02.986841    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:02.998774    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:02.998783    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:03.011174    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:03.011183    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:03.025661    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:03.025669    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:03.063327    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:03.063339    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:03.078409    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:03.078419    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:03.093554    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:03.093564    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:03.105657    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:03.105665    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:03.121926    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:03.121939    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:03.136817    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:03.136830    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
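	Taken together, the entries trace a fixed retry loop: probe healthz, and on timeout re-enumerate containers and re-gather logs before the next attempt, with each full round taking roughly eight seconds per process. A schematic of that control flow; the interval and budget values are assumptions for illustration only:

```go
// Schematic of the probe-then-collect retry loop visible in the log.
package main

import (
	"fmt"
	"time"
)

func waitForAPIServer(probe func() error, collect func(), budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		collect() // enumerate containers, tail logs, journalctl, dmesg, ...
		time.Sleep(2500 * time.Millisecond) // assumed pause between rounds
	}
	return fmt.Errorf("apiserver never became healthy within %s", budget)
}

func main() {
	err := waitForAPIServer(
		func() error { return fmt.Errorf("context deadline exceeded") },
		func() { fmt.Println("gathering logs ...") },
		10*time.Second,
	)
	fmt.Println(err)
}
```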
	I0520 03:38:05.651775    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:02.919860    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:02.919951    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:02.931261    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:02.931329    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:02.942126    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:02.942189    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:02.953146    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:02.953209    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:02.964180    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:02.964249    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:02.975357    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:02.975427    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:02.986750    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:02.986820    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:02.998093    7666 logs.go:276] 0 containers: []
	W0520 03:38:02.998106    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:02.998167    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:03.010773    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:03.010792    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:03.010798    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:03.023938    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:03.023949    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:03.037621    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:03.037634    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:03.064452    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:03.064462    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:03.104082    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:03.104098    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:03.109264    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:03.109275    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:03.147654    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:03.147673    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:03.162671    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:03.162681    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:03.180487    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:03.180497    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:03.192424    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:03.192435    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:03.206364    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:03.206375    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:03.218519    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:03.218533    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:03.233956    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:03.233968    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:05.754724    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:10.653998    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:10.654202    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:10.673093    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:10.673177    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:10.686252    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:10.686331    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:10.697935    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:10.698010    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:10.708509    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:10.708576    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:10.720006    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:10.720070    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:10.730332    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:10.730392    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:10.741006    7819 logs.go:276] 0 containers: []
	W0520 03:38:10.741018    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:10.741080    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:10.751075    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:10.751089    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:10.751094    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:10.791300    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:10.791310    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:10.816644    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:10.816657    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:10.832384    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:10.832396    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:10.852172    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:10.852184    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:10.864797    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:10.864810    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:10.877268    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:10.877282    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:10.889628    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:10.889639    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:10.894639    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:10.894647    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:10.909514    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:10.909524    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:10.921544    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:10.921554    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:10.934625    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:10.934637    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:10.949871    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:10.949887    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:10.987610    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:10.987623    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:10.755652    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:10.755734    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:10.766585    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:10.766657    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:10.777723    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:10.777790    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:10.788743    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:10.788812    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:10.800450    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:10.800516    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:10.811907    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:10.811976    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:10.822916    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:10.822986    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:10.834699    7666 logs.go:276] 0 containers: []
	W0520 03:38:10.834709    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:10.834768    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:10.845950    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:10.845967    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:10.845973    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:10.857957    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:10.857969    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:10.884954    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:10.884973    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:10.925252    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:10.925264    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:10.930399    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:10.930412    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:10.967851    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:10.967864    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:10.980074    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:10.980085    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:11.001323    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:11.001337    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:11.013655    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:11.013668    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:11.028763    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:11.028774    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:11.048059    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:11.048068    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:11.062065    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:11.062075    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:11.074553    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:11.074564    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:11.002199    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:11.004387    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:11.021059    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:11.021070    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:11.047275    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:11.047286    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
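
The repeating pattern above is minikube's apiserver wait loop: each attempt GETs https://10.0.2.15:8443/healthz with a short client-side timeout, and when the request times out ("context deadline exceeded (Client.Timeout exceeded while awaiting headers)"), a diagnostic pass re-enumerates the control-plane containers and tails their logs before the next probe. A minimal Go sketch of that probe, assuming the roughly 5-second window implied by the timestamps; the skip-verify transport and the retry cadence are illustrative assumptions, not minikube's exact code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver healthz endpoint.
    // The Client.Timeout bounds the whole request, which is what yields the
    // "context deadline exceeded (Client.Timeout exceeded while awaiting
    // headers)" errors seen above while the apiserver is unreachable.
    func probeHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: timeout,
            Transport: &http.Transport{
                // Assumption: skip certificate verification for the
                // self-signed guest apiserver; not necessarily the
                // real minikube configuration.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        for {
            err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second)
            if err == nil {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Println("stopped:", err)
            // ...gather container logs here (see the sketch further
            // below), then retry after a short pause...
            time.Sleep(3 * time.Second)
        }
    }
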
	I0520 03:38:13.562287    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:13.595571    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:18.563301    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:18.563433    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:18.576331    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:18.576414    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:18.587541    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:18.587609    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:18.604265    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:18.604331    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:18.615075    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:18.615146    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:18.626730    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:18.626803    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:18.638137    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:18.638206    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:18.649562    7819 logs.go:276] 0 containers: []
	W0520 03:38:18.649576    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:18.649637    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:18.660832    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:18.660849    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:18.660855    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:18.676381    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:18.676397    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:18.692351    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:18.692359    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:18.704439    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:18.704449    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:18.718894    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:18.718904    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:18.733433    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:18.733444    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:18.746419    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:18.746431    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:18.758389    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:18.758400    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:18.798236    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:18.798249    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:18.824300    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:18.824310    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:18.841602    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:18.841611    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:18.861123    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:18.861139    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:18.887176    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:18.887185    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:18.899666    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:18.899674    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:18.903938    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:18.903947    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:18.939979    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:18.939990    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:18.951296    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:18.951307    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:18.596117    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:18.596193    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:18.607730    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:18.607804    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:18.618995    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:18.619064    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:18.630266    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:18.630333    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:18.644037    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:18.644108    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:18.655823    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:18.655896    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:18.667299    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:18.667373    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:18.678484    7666 logs.go:276] 0 containers: []
	W0520 03:38:18.678494    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:18.678551    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:18.689994    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:18.690009    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:18.690015    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:18.702900    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:18.702912    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:18.728389    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:18.728403    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:18.741051    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:18.741064    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:18.780747    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:18.780758    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:18.823028    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:18.823039    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:18.839345    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:18.839357    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:18.855621    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:18.855635    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:18.868859    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:18.868871    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:18.887011    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:18.887023    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:18.899548    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:18.899560    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:18.904325    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:18.904332    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:18.919111    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:18.919123    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:21.433720    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:21.471715    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:26.436006    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:26.436123    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:26.447843    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:26.447914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:26.458298    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:26.458366    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:26.469043    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:26.469116    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:26.480618    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:26.480685    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:26.499631    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:26.499673    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:26.510921    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:26.510964    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:26.521927    7666 logs.go:276] 0 containers: []
	W0520 03:38:26.521936    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:26.521962    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:26.533593    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:26.533607    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:26.533611    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:26.559804    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:26.559819    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:26.599257    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:26.599272    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:26.603963    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:26.603973    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:26.642135    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:26.642148    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:26.657588    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:26.657601    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:26.672795    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:26.672812    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:26.688805    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:26.688817    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:26.701217    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:26.701229    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:26.713738    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:26.713749    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:26.726614    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:26.726626    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:26.738649    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:26.738660    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:26.759701    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:26.759716    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
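
Each "Gathering logs" pass above follows the same recipe: one docker ps -a query per k8s_<component> name filter to resolve container IDs, then docker logs --tail 400 on every ID found (plus journalctl for the kubelet and Docker units, and kubectl describe nodes). A hedged Go sketch of that enumeration, shelling out to the docker CLI directly; the component list mirrors the filters in the log, while the program structure is an illustrative assumption rather than minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // components mirrors the --filter=name=k8s_<component> queries above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
        "storage-provisioner",
    }

    // containerIDs resolves all container IDs (running or exited) whose
    // name matches k8s_<component>, equivalent to:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            for _, id := range ids {
                // Tail the last 400 lines, as in the runs above.
                logs, _ := exec.Command("docker", "logs",
                    "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }
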
	I0520 03:38:26.473897    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:26.473966    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:26.485808    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:26.485882    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:26.498292    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:26.498378    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:26.509606    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:26.509679    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:26.521626    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:26.521701    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:26.533262    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:26.533329    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:26.544947    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:26.545026    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:26.557319    7819 logs.go:276] 0 containers: []
	W0520 03:38:26.557330    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:26.557389    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:26.568609    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:26.568625    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:26.568631    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:26.608036    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:26.608049    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:26.647013    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:26.647027    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:26.673023    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:26.673033    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:26.687826    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:26.687839    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:26.706024    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:26.706035    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:26.718778    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:26.718790    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:26.744437    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:26.744453    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:26.749098    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:26.749108    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:26.764201    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:26.764212    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:26.776286    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:26.776297    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:26.787904    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:26.787915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:26.802504    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:26.802513    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:26.817083    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:26.817093    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:26.827748    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:26.827761    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:26.842975    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:26.842985    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:26.854324    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:26.854334    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:29.367912    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:29.274901    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:34.370207    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:34.370383    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:34.381759    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:34.381823    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:34.393253    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:34.393316    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:34.404638    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:34.404701    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:34.416365    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:34.416433    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:34.427287    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:34.427352    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:34.438456    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:34.438528    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:34.449869    7819 logs.go:276] 0 containers: []
	W0520 03:38:34.449886    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:34.449943    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:34.461981    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:34.461999    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:34.462004    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:34.478408    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:34.478421    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:34.491424    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:34.491434    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:34.503827    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:34.503839    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:34.516397    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:34.516407    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:34.556882    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:34.556892    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:34.561396    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:34.561402    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:34.600950    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:34.600963    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:34.615395    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:34.615411    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:34.629561    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:34.629572    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:34.644720    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:34.644732    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:34.658829    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:34.658838    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:34.682720    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:34.682728    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:34.696094    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:34.696106    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:34.707940    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:34.707954    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:34.733032    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:34.733047    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:34.744997    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:34.745007    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:34.276814    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:34.276974    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:34.287834    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:34.287914    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:34.298224    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:34.298293    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:34.308490    7666 logs.go:276] 2 containers: [e2340c42584c 5cde54d98e3e]
	I0520 03:38:34.308560    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:34.318950    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:34.319011    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:34.329935    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:34.330004    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:34.340331    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:34.340401    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:34.350734    7666 logs.go:276] 0 containers: []
	W0520 03:38:34.350745    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:34.350798    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:34.360888    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:34.360904    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:34.360910    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:34.379281    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:34.379295    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:34.391689    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:34.391700    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:34.431417    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:34.431428    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:34.444241    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:34.444253    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:34.467131    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:34.467143    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:34.480263    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:34.480273    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:34.497509    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:34.497521    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:34.510659    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:34.510675    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:34.535225    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:34.535236    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:34.572847    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:34.572865    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:34.577809    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:34.577816    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:34.593675    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:34.593688    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:37.268863    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:37.111014    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:42.270363    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:42.270445    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:42.282706    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:42.282778    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:42.294581    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:42.294652    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:42.307138    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:42.307210    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:42.318502    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:42.318574    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:42.329855    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:42.329922    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:42.340829    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:42.340897    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:42.351905    7819 logs.go:276] 0 containers: []
	W0520 03:38:42.351916    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:42.351977    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:42.363506    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:42.363524    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:42.363530    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:42.381760    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:42.381770    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:42.394778    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:42.394789    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:42.407817    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:42.407832    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:42.412110    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:42.412118    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:42.436771    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:42.436785    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:42.451568    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:42.451578    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:42.468333    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:42.468346    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:42.508478    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:42.508505    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:42.520435    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:42.520445    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:42.535867    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:42.535876    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:42.547081    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:42.547092    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:42.562036    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:42.562045    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:42.574995    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:42.575007    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:42.609662    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:42.609672    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:42.623357    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:42.623369    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:42.637391    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:42.637401    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:45.162827    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:42.113259    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:42.113483    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:42.137259    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:42.137377    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:42.155457    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:42.155535    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:42.168004    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:38:42.168081    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:42.178714    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:42.178778    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:42.189296    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:42.189369    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:42.200501    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:42.200567    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:42.214232    7666 logs.go:276] 0 containers: []
	W0520 03:38:42.214243    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:42.214298    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:42.225506    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:42.225523    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:42.225529    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:42.244551    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:42.244561    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:42.258514    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:42.258529    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:42.270612    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:42.270619    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:42.298011    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:42.298021    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:42.337817    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:42.337832    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:42.354422    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:42.354431    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:42.366998    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:38:42.367011    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:38:42.378831    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:42.378843    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:42.391438    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:42.391453    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:42.404327    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:42.404340    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:42.409576    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:42.409585    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:42.447906    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:38:42.447919    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:38:42.460805    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:42.460817    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:42.474504    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:42.474515    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:44.997144    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:50.165092    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:50.165235    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:50.176647    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:50.176720    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:50.190033    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:50.190171    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:50.201965    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:50.202035    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:50.218426    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:50.218500    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:50.229689    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:50.229758    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:50.244683    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:50.244758    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:50.258109    7819 logs.go:276] 0 containers: []
	W0520 03:38:50.258127    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:50.258197    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:50.270182    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:50.270211    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:50.270234    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:50.285197    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:50.285207    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:50.298387    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:50.298399    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:50.323851    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:50.323862    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:50.339010    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:50.339023    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:50.366480    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:50.366494    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:50.387510    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:50.387528    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:50.404093    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:50.404109    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:50.442877    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:50.442890    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:50.447140    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:50.447146    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:50.485392    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:50.485403    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:50.499513    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:50.499524    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:50.510584    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:50.510596    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:50.522836    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:50.522846    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:50.537590    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:50.537600    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:50.549476    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:50.549486    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:50.568699    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:50.568708    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:49.999848    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:50.000358    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:50.042086    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:50.042242    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:50.063692    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:50.063789    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:50.078100    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:38:50.078184    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:50.089726    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:50.089794    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:50.100863    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:50.100935    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:50.111914    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:50.111986    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:50.122649    7666 logs.go:276] 0 containers: []
	W0520 03:38:50.122667    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:50.122720    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:50.142943    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:38:50.142962    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:50.142967    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:50.180244    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:50.180255    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:50.199417    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:50.199429    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:50.212459    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:50.212470    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:50.234577    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:38:50.234593    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:38:50.247549    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:50.247560    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:50.260701    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:50.260710    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:50.273031    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:50.273041    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:50.300113    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:50.300123    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:50.312423    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:50.312435    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:50.352484    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:50.352502    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:50.363505    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:50.363520    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:50.380263    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:50.380278    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:50.396004    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:50.396017    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:50.412894    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:38:50.412906    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:38:53.094791    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:52.926511    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:58.097104    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:58.097191    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:58.109260    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:58.109333    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:58.121003    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:58.121083    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:58.132262    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:58.132333    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:58.143525    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:58.143601    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:58.155554    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:58.155623    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:58.167197    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:58.167267    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:58.177927    7819 logs.go:276] 0 containers: []
	W0520 03:38:58.177940    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:58.178000    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:58.188848    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:58.188868    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:58.188875    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:58.203248    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:58.203256    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:58.220058    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:58.220070    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:58.232622    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:58.232634    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:58.248094    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:58.248103    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:58.264593    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:58.264605    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:58.277129    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:58.277139    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:58.289127    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:58.289138    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:58.301806    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:58.301818    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:58.320855    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:58.320871    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:58.359082    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:58.359100    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:58.364992    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:58.365007    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:58.403045    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:58.403057    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:58.416787    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:58.416800    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:58.441461    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:58.441472    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:58.453607    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:58.453618    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:58.468914    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:58.468924    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:00.994271    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:57.927356    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
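(Note: the paired "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" lines above come from two parallel test processes, pids 7819 and 7666, each polling the same guest address. A minimal shell equivalent of one poll cycle — an illustration with an assumed ~5 s client timeout inferred from the check-to-"stopped" gap in the timestamps, not minikube's actual implementation:

    # Probe the apiserver health endpoint inside the VM; -k skips cert
    # verification, --max-time 5 mirrors the ~5 s check-to-"stopped" gap.
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
        echo "apiserver healthz timed out; collecting diagnostics..."
        sleep 2    # rough stand-in for the pause between cycles in this log
    done
)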
	I0520 03:38:57.927599    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:57.950326    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:38:57.950445    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:57.966457    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:38:57.966535    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:57.978545    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:38:57.978611    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:57.989432    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:38:57.989503    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:57.999950    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:38:58.000019    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:58.011127    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:38:58.011197    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:58.021948    7666 logs.go:276] 0 containers: []
	W0520 03:38:58.021959    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:58.022014    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:58.032724    7666 logs.go:276] 1 containers: [ce0aea699fc9]
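(Each failed health check triggers the same enumeration pass seen above: one docker ps per control-plane component, then a 400-line tail of every container found. A standalone sketch reproducing that pass — the k8s_ name prefix, the --format template, and the tail depth are copied from the Run: lines; the loop itself is an assumed equivalent, not minikube's code:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
        if [ -z "$ids" ]; then
            echo "No container was found matching \"${c}\"" >&2
            continue
        fi
        for id in $ids; do
            echo "=== ${c} [${id}] ==="
            docker logs --tail 400 "$id"   # same tail depth as the log above
        done
    done
)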
	I0520 03:38:58.032743    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:58.032748    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:58.069065    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:38:58.069076    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:38:58.086573    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:38:58.086583    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
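(The container-status command above uses a shell fallback worth noting: `which crictl` substitutes the crictl path when the binary is present; when it is absent, the `echo` leaves a bare "crictl" that fails under sudo, and the || chain falls back to plain docker. Annotated:

    # prefer crictl when installed; otherwise fall back to docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
)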
	I0520 03:38:58.098377    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:58.098385    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:58.103123    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:38:58.103133    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:38:58.115699    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:38:58.115710    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:38:58.128346    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:38:58.128358    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:38:58.147647    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:38:58.147658    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:38:58.160760    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:58.160772    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:58.200510    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:38:58.200527    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:38:58.215293    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:38:58.215309    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:38:58.227387    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:38:58.227397    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:38:58.245323    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:38:58.245336    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:38:58.259673    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:38:58.259687    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:38:58.272685    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:58.272696    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
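(Besides per-container logs, every gathering pass also pulls host-level sources over SSH: the kubelet and docker/cri-docker journald units, kernel messages, and a kubectl describe of the nodes. The dmesg flags are util-linux options: -H human-readable output, -P no pager, -L=never no color, --level restricting output to warning severity and above. The commands, runnable as-is inside the guest:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
)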
	I0520 03:39:00.802181    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:05.996657    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:05.996742    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:05.804452    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:05.804680    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:05.826505    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:05.826608    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:05.842171    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:05.842239    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:05.854903    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:05.854978    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:05.866405    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:05.866476    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:05.877344    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:05.877415    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:05.888083    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:05.888153    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:05.897964    7666 logs.go:276] 0 containers: []
	W0520 03:39:05.897974    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:05.898028    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:05.908110    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:05.908128    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:05.908133    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:05.921282    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:05.921293    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:05.932817    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:05.932827    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:05.937195    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:05.937207    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:05.977548    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:05.977560    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:05.991313    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:05.991323    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:06.003841    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:06.003852    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:06.025136    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:06.025148    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:06.052240    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:06.052252    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:06.065245    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:06.065256    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:06.104755    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:06.104772    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:06.124667    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:06.124682    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:06.151837    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:06.151851    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:06.167468    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:06.167480    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:06.179881    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:06.179892    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:06.008160    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:06.008231    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:06.019479    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:06.019550    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:06.030312    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:06.030381    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:06.041366    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:06.041435    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:06.052132    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:06.052200    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:06.063348    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:06.063417    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:06.075152    7819 logs.go:276] 0 containers: []
	W0520 03:39:06.075163    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:06.075222    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:06.088636    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:06.088656    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:06.088662    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:06.114849    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:06.114864    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:06.127403    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:06.127414    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:06.140364    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:06.140375    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:06.152273    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:06.152281    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:06.171127    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:06.171135    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:06.185861    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:06.185873    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:06.202807    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:06.202821    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:06.218407    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:06.218421    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:06.230319    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:06.230331    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:06.268165    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:06.268182    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:06.304613    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:06.304626    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:06.319027    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:06.319041    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:06.330827    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:06.330840    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:06.334973    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:06.334982    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:06.349281    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:06.349290    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:06.360688    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:06.360699    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:08.886507    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:08.699073    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:13.889159    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:13.889322    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:13.900583    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:13.900654    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:13.914618    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:13.914684    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:13.925362    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:13.925438    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:13.936731    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:13.936807    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:13.953116    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:13.953187    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:13.964900    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:13.964966    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:13.976013    7819 logs.go:276] 0 containers: []
	W0520 03:39:13.976025    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:13.976087    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:13.987140    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:13.987158    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:13.987163    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:13.999415    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:13.999428    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:14.012493    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:14.012505    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:14.017749    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:14.017761    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:14.056596    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:14.056610    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:14.070938    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:14.070948    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:14.085546    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:14.085555    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:14.110973    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:14.110984    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:14.129218    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:14.129227    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:14.141378    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:14.141392    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:14.156556    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:14.156566    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:14.171946    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:14.171957    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:14.183473    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:14.183483    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:14.197777    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:14.197787    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:14.221264    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:14.221272    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:14.259165    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:14.259172    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:14.274660    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:14.274670    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:13.701355    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:13.701529    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:13.717301    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:13.717381    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:13.730212    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:13.730285    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:13.741503    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:13.741572    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:13.751834    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:13.751899    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:13.761869    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:13.761931    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:13.772372    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:13.772439    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:13.782807    7666 logs.go:276] 0 containers: []
	W0520 03:39:13.782817    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:13.782866    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:13.795408    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:13.795424    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:13.795429    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:13.810153    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:13.810163    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:13.821760    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:13.821771    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:13.833031    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:13.833044    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:13.847969    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:13.847979    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:13.884875    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:13.884882    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:13.900295    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:13.900310    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:13.912900    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:13.912911    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:13.931566    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:13.931577    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:13.957613    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:13.957629    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:13.962591    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:13.962605    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:14.001821    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:14.001829    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:14.014766    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:14.014777    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:14.027723    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:14.027735    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:14.040461    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:14.040474    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:16.556504    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:16.794981    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:21.558667    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:21.558872    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:21.577327    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:21.577417    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:21.590789    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:21.590865    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:21.602065    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:21.602133    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:21.612528    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:21.612599    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:21.622611    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:21.622688    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:21.633302    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:21.633379    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:21.643635    7666 logs.go:276] 0 containers: []
	W0520 03:39:21.643645    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:21.643698    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:21.654017    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:21.654033    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:21.654038    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:21.667658    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:21.667670    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:21.678830    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:21.678839    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:21.714049    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:21.714059    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:21.728442    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:21.728454    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:21.753753    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:21.753761    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:21.788958    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:21.788971    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:21.801647    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:21.801659    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:21.797106    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:21.797198    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:21.808762    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:21.808837    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:21.820873    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:21.820950    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:21.832187    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:21.832257    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:21.849762    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:21.849839    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:21.861212    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:21.861287    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:21.872959    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:21.873032    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:21.890430    7819 logs.go:276] 0 containers: []
	W0520 03:39:21.890441    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:21.890500    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:21.901576    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:21.901597    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:21.901603    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:21.936204    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:21.936215    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:21.961085    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:21.961100    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:21.984652    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:21.984660    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:21.999330    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:21.999340    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:22.010944    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:22.010956    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:22.024877    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:22.024886    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:22.036843    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:22.036853    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:22.055278    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:22.055292    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:22.067540    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:22.067552    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:22.085120    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:22.085131    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:22.097161    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:22.097171    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:22.135723    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:22.135732    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:22.139784    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:22.139792    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:22.153761    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:22.153776    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:22.165605    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:22.165616    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:22.182337    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:22.182348    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:24.696456    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:21.814182    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:21.814195    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:21.836073    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:21.836084    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:21.841384    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:21.841394    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:21.856165    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:21.856176    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:21.868477    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:21.868490    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:21.884852    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:21.884867    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:21.897253    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:21.897266    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:24.411648    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:29.698559    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:29.698661    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:29.710618    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:29.710690    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:29.722174    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:29.722246    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:29.733379    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:29.733474    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:29.744818    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:29.744893    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:29.755743    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:29.755811    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:29.766909    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:29.766981    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:29.778469    7819 logs.go:276] 0 containers: []
	W0520 03:39:29.778484    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:29.778547    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:29.790606    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:29.790623    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:29.790628    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:29.805392    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:29.805403    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:29.816963    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:29.816977    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:29.854517    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:29.854527    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:29.866426    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:29.866440    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:29.883643    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:29.883652    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:29.908071    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:29.908081    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:29.919829    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:29.919839    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:29.942901    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:29.942911    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:29.958009    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:29.958019    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:29.962588    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:29.962594    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:29.982536    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:29.982545    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:29.993534    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:29.993544    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:30.004158    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:30.004170    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:30.016496    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:30.016506    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:30.051007    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:30.051023    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:30.064768    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:30.064782    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:29.413910    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:29.414175    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:29.436727    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:29.436845    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:29.451562    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:29.451641    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:29.464079    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:29.464157    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:29.475131    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:29.475192    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:29.485391    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:29.485456    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:29.497014    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:29.497083    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:29.506758    7666 logs.go:276] 0 containers: []
	W0520 03:39:29.506772    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:29.506830    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:29.521125    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:29.521144    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:29.521149    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:29.538156    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:29.538167    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:29.553115    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:29.553125    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:29.570178    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:29.570189    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:29.594091    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:29.594098    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:29.629620    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:29.629630    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:29.643744    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:29.643759    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:29.655243    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:29.655254    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:29.670397    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:29.670408    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:29.687019    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:29.687032    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:29.699748    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:29.699759    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:29.716917    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:29.716928    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:29.729315    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:29.729327    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:29.734306    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:29.734313    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:29.772959    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:29.772971    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:32.581259    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:32.289634    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:37.583473    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:37.583594    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:37.594975    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:37.595066    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:37.610441    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:37.610512    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:37.621869    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:37.621945    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:37.638494    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:37.638571    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:37.662027    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:37.662102    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:37.680377    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:37.680459    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:37.694338    7819 logs.go:276] 0 containers: []
	W0520 03:39:37.694350    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:37.694414    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:37.706412    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:37.706430    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:37.706436    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:37.718431    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:37.718441    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:37.742048    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:37.742057    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:37.758264    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:37.758273    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:37.769754    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:37.769764    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:37.791656    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:37.791663    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:37.806029    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:37.806039    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:37.820262    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:37.820272    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:37.838384    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:37.838394    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:37.875207    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:37.875216    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:37.889397    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:37.889406    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:37.914122    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:37.914133    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:37.926366    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:37.926375    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:37.941966    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:37.941975    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:37.946786    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:37.946792    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:37.982358    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:37.982372    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:37.996905    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:37.996915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:40.510639    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:37.291878    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:37.292111    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:37.315972    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:37.316102    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:37.331519    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:37.331596    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:37.343932    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:37.343999    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:37.357551    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:37.357626    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:37.368160    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:37.368240    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:37.378674    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:37.378746    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:37.392159    7666 logs.go:276] 0 containers: []
	W0520 03:39:37.392169    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:37.392225    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:37.402610    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:37.402633    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:37.402639    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:37.414047    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:37.414061    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:37.442774    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:37.442784    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:37.454075    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:37.454086    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:37.469129    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:37.469140    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:37.484460    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:37.484469    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:37.502903    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:37.502912    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:37.514883    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:37.514898    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:37.551795    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:37.551804    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:37.566083    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:37.566095    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:37.577442    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:37.577452    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:37.591424    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:37.591435    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:37.616474    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:37.616491    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:37.621788    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:37.621800    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:37.637949    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:37.637962    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:40.181113    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:45.512795    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
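
The alternating "Checking apiserver healthz" / "stopped" lines are minikube polling the apiserver's /healthz endpoint until it answers; every probe in this run times out before any response headers arrive. A rough shell equivalent of that probe, assuming the same in-guest address and a 5s client timeout:

    # Poll /healthz until it reports "ok" or 30 attempts pass.
    # -k skips TLS verification; --max-time mirrors the Client.Timeout in the log.
    for i in $(seq 1 30); do
      curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok' && { echo healthy; break; }
      sleep 2
    done
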
	I0520 03:39:45.512896    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:45.524099    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:45.524182    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:45.539868    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:45.539952    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:45.550909    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:45.550974    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:45.561999    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:45.562073    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:45.572976    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:45.573041    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:45.588582    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:45.588650    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:45.599325    7819 logs.go:276] 0 containers: []
	W0520 03:39:45.599336    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:45.599396    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:45.615748    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:45.615764    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:45.615769    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:45.663617    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:45.663627    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:45.680200    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:45.680216    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:45.691504    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:45.691514    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:45.708289    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:45.708300    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:45.722435    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:45.722443    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:45.736744    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:45.736754    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:45.747627    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:45.747637    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:45.785586    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:45.785593    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:45.790112    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:45.790118    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:45.814739    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:45.814755    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:45.837068    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:45.837077    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:45.850369    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:45.850379    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:45.862918    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:45.862928    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:45.898822    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:45.898830    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:45.913938    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:45.913947    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:45.925712    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:45.925725    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
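
Each "Gathering logs" block above follows the same two-step pattern: resolve container IDs for one component via a docker name filter, then tail each container's last 400 lines. Condensed into a single loop (component name illustrative):

    # IDs first, then logs -- the same pair of commands logs.go runs per component.
    for id in $(docker ps -a --filter=name=k8s_coredns --format '{{.ID}}'); do
      echo "=== $id ==="
      docker logs --tail 400 "$id"
    done
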
	I0520 03:39:45.183425    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:45.183708    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:45.217543    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:45.217668    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:45.235367    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:45.235455    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:45.249482    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:45.249560    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:45.261370    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:45.261445    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:45.272581    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:45.272657    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:45.284004    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:45.284075    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:45.294699    7666 logs.go:276] 0 containers: []
	W0520 03:39:45.294710    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:45.294768    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:45.305605    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:45.305627    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:45.305633    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:45.320216    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:45.320229    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:45.363349    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:45.363363    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:45.377745    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:45.377758    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:45.389472    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:45.389486    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:45.407982    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:45.407994    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:45.412599    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:45.412608    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:45.424318    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:45.424331    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:45.445895    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:45.445906    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:45.458535    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:45.458546    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:45.482163    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:45.482177    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:45.495193    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:45.495205    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:45.507594    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:45.507605    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:45.520009    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:45.520020    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:45.532816    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:45.532831    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:48.450891    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:48.074748    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:53.451340    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:53.451374    7819 kubeadm.go:591] duration metric: took 4m3.83704325s to restartPrimaryControlPlane
	W0520 03:39:53.451403    7819 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 03:39:53.451417    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 03:39:54.453466    7819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002055708s)
	I0520 03:39:54.453530    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 03:39:54.458771    7819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:39:54.461757    7819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:39:54.464513    7819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:39:54.464521    7819 kubeadm.go:156] found existing configuration files:
	
	I0520 03:39:54.464544    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0520 03:39:54.466978    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:39:54.467002    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:39:54.469697    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0520 03:39:54.472832    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:39:54.472854    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:39:54.475516    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0520 03:39:54.478042    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:39:54.478063    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:39:54.481129    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0520 03:39:54.483671    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:39:54.483692    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
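
The four grep/rm pairs above implement one rule: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. The same cleanup as a loop, with the endpoint taken from the log:

    endpoint="https://control-plane.minikube.internal:51302"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
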
	I0520 03:39:54.486565    7819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 03:39:54.504973    7819 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 03:39:54.505047    7819 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 03:39:54.555266    7819 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 03:39:54.555356    7819 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 03:39:54.555419    7819 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 03:39:54.604032    7819 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 03:39:54.608150    7819 out.go:204]   - Generating certificates and keys ...
	I0520 03:39:54.608185    7819 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 03:39:54.608215    7819 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 03:39:54.608249    7819 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 03:39:54.608289    7819 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 03:39:54.608327    7819 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 03:39:54.608353    7819 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 03:39:54.608386    7819 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 03:39:54.608418    7819 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 03:39:54.608461    7819 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 03:39:54.608498    7819 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 03:39:54.608520    7819 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 03:39:54.608551    7819 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 03:39:54.809249    7819 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 03:39:54.877233    7819 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 03:39:55.039889    7819 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 03:39:55.238747    7819 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 03:39:55.267985    7819 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 03:39:55.268319    7819 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 03:39:55.268340    7819 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 03:39:55.350347    7819 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 03:39:55.353371    7819 out.go:204]   - Booting up control plane ...
	I0520 03:39:55.353501    7819 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 03:39:55.353582    7819 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 03:39:55.353715    7819 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 03:39:55.359486    7819 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 03:39:55.360379    7819 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
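
While kubeadm waits on the kubelet, the static Pod manifests it just wrote can be inspected directly on the node; the container-status check below is the same fallback chain the log gatherer itself uses:

    ls /etc/kubernetes/manifests
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
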
	I0520 03:39:53.077130    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:53.077571    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:53.113109    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:39:53.113245    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:53.133643    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:39:53.133751    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:53.148178    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:39:53.148257    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:53.160214    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:39:53.160285    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:53.172279    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:39:53.172358    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:53.182708    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:39:53.182780    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:53.192559    7666 logs.go:276] 0 containers: []
	W0520 03:39:53.192570    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:53.192626    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:53.204146    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:39:53.204165    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:53.204170    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:53.229262    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:53.229271    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:53.266602    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:39:53.266612    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:39:53.280873    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:39:53.280882    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:39:53.292287    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:39:53.292296    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:39:53.303824    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:39:53.303836    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:39:53.315890    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:39:53.315905    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:39:53.331460    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:39:53.331475    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:39:53.356056    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:53.356068    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:53.361015    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:39:53.361022    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:39:53.375583    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:39:53.375593    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:39:53.389310    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:53.389321    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:53.423873    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:39:53.423883    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:39:53.436205    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:39:53.436216    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:39:53.451907    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:39:53.451915    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:55.966682    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:59.862464    7819 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501729 seconds
	I0520 03:39:59.862544    7819 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 03:39:59.867804    7819 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 03:40:00.377727    7819 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 03:40:00.377849    7819 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-555000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 03:40:00.883816    7819 kubeadm.go:309] [bootstrap-token] Using token: yby7v6.tsn74ll1eer8ce0x
	I0520 03:40:00.889853    7819 out.go:204]   - Configuring RBAC rules ...
	I0520 03:40:00.889921    7819 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 03:40:00.889971    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 03:40:00.897247    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 03:40:00.898140    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 03:40:00.899017    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 03:40:00.899825    7819 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 03:40:00.903118    7819 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 03:40:01.083587    7819 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 03:40:01.290485    7819 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 03:40:01.290505    7819 kubeadm.go:309] 
	I0520 03:40:01.290595    7819 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 03:40:01.290608    7819 kubeadm.go:309] 
	I0520 03:40:01.290722    7819 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 03:40:01.290735    7819 kubeadm.go:309] 
	I0520 03:40:01.290773    7819 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 03:40:01.290849    7819 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 03:40:01.290913    7819 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 03:40:01.290924    7819 kubeadm.go:309] 
	I0520 03:40:01.290997    7819 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 03:40:01.291004    7819 kubeadm.go:309] 
	I0520 03:40:01.291061    7819 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 03:40:01.291070    7819 kubeadm.go:309] 
	I0520 03:40:01.291139    7819 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 03:40:01.291311    7819 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 03:40:01.291390    7819 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 03:40:01.291405    7819 kubeadm.go:309] 
	I0520 03:40:01.291454    7819 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 03:40:01.291563    7819 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 03:40:01.291569    7819 kubeadm.go:309] 
	I0520 03:40:01.291612    7819 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yby7v6.tsn74ll1eer8ce0x \
	I0520 03:40:01.291662    7819 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 \
	I0520 03:40:01.291671    7819 kubeadm.go:309] 	--control-plane 
	I0520 03:40:01.291674    7819 kubeadm.go:309] 
	I0520 03:40:01.291709    7819 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 03:40:01.291712    7819 kubeadm.go:309] 
	I0520 03:40:01.291749    7819 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yby7v6.tsn74ll1eer8ce0x \
	I0520 03:40:01.291804    7819 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 
	I0520 03:40:01.291861    7819 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
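
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node to validate a join command; this is the standard kubeadm recipe, using the certificateDir reported earlier in this run (/var/lib/minikube/certs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1
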
	I0520 03:40:01.291873    7819 cni.go:84] Creating CNI manager for ""
	I0520 03:40:01.291881    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:40:01.298762    7819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 03:40:01.301779    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 03:40:01.305305    7819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
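
The 496-byte file scp'd here is the bridge CNI conflist minikube generates. A minimal sketch of what such a bridge-plus-portmap conflist looks like (field values illustrative, not the exact bytes written above):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true,
         "ipMasq": true, "hairpinMode": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF
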
	I0520 03:40:01.310856    7819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 03:40:01.310903    7819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:40:01.310991    7819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-555000 minikube.k8s.io/updated_at=2024_05_20T03_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=stopped-upgrade-555000 minikube.k8s.io/primary=true
	I0520 03:40:01.318852    7819 ops.go:34] apiserver oom_adj: -16
	I0520 03:40:01.358893    7819 kubeadm.go:1107] duration metric: took 48.02525ms to wait for elevateKubeSystemPrivileges
	W0520 03:40:01.358944    7819 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 03:40:01.358949    7819 kubeadm.go:393] duration metric: took 4m11.758568667s to StartCluster
	I0520 03:40:01.358959    7819 settings.go:142] acquiring lock: {Name:mkc3af27fbea4a81f456d1d023b17ad3b4bc78ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:40:01.359047    7819 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:40:01.359443    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:40:01.359641    7819 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:40:01.363736    7819 out.go:177] * Verifying Kubernetes components...
	I0520 03:40:01.359651    7819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 03:40:01.359716    7819 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:40:01.373797    7819 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-555000"
	I0520 03:40:01.373814    7819 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-555000"
	W0520 03:40:01.373819    7819 addons.go:243] addon storage-provisioner should already be in state true
	I0520 03:40:01.373830    7819 host.go:66] Checking if "stopped-upgrade-555000" exists ...
	I0520 03:40:01.373852    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:40:01.373866    7819 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-555000"
	I0520 03:40:01.373875    7819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-555000"
	I0520 03:40:01.375066    7819 kapi.go:59] client config for stopped-upgrade-555000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ea0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:40:01.375187    7819 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-555000"
	W0520 03:40:01.375191    7819 addons.go:243] addon default-storageclass should already be in state true
	I0520 03:40:01.375199    7819 host.go:66] Checking if "stopped-upgrade-555000" exists ...
	I0520 03:40:01.379768    7819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:40:00.968977    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:00.969104    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:40:00.980148    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:40:00.980225    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:40:00.991711    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:40:00.991783    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:40:01.002663    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:40:01.002739    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:40:01.013812    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:40:01.013877    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:40:01.023774    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:40:01.023844    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:40:01.034502    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:40:01.034564    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:40:01.044844    7666 logs.go:276] 0 containers: []
	W0520 03:40:01.044856    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:40:01.044912    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:40:01.055684    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:40:01.055701    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:40:01.055709    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:40:01.095381    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:40:01.095393    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:40:01.111162    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:40:01.111175    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:40:01.126144    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:40:01.126153    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:40:01.138253    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:40:01.138268    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:40:01.142997    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:40:01.143011    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:40:01.155784    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:40:01.155799    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:40:01.168918    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:40:01.168931    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:40:01.187236    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:40:01.187249    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:40:01.200612    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:40:01.200622    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:40:01.212733    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:40:01.212742    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:40:01.251056    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:40:01.251074    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:40:01.263889    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:40:01.263900    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:40:01.282165    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:40:01.282179    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:40:01.294619    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:40:01.294628    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:40:01.383780    7819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:40:01.383786    7819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 03:40:01.383794    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:40:01.384435    7819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 03:40:01.384439    7819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 03:40:01.384442    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:40:01.463302    7819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:40:01.468038    7819 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:40:01.468078    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:40:01.471825    7819 api_server.go:72] duration metric: took 112.174792ms to wait for apiserver process to appear ...
	I0520 03:40:01.471832    7819 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:40:01.471839    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:01.531605    7819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 03:40:01.532567    7819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
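
Enabling each addon is just a kubectl apply against the in-VM kubeconfig, as the two commands above show. Whether the provisioner actually came up can be checked the same way (pod name assumed to be the default "storage-provisioner"):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pod storage-provisioner
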
	I0520 03:40:03.820738    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:06.473942    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:06.473992    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:08.821622    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:08.821746    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:40:08.833022    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:40:08.833100    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:40:08.843850    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:40:08.843917    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:40:08.854255    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:40:08.854318    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:40:08.864519    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:40:08.864588    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:40:08.874449    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:40:08.874513    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:40:08.885034    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:40:08.885100    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:40:08.895715    7666 logs.go:276] 0 containers: []
	W0520 03:40:08.895733    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:40:08.895792    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:40:08.906940    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:40:08.906958    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:40:08.906963    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:40:08.918434    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:40:08.918445    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:40:08.942162    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:40:08.942169    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:40:08.953737    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:40:08.953748    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:40:08.991858    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:40:08.991869    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:40:08.996229    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:40:08.996235    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:40:09.010879    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:40:09.010890    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:40:09.022335    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:40:09.022345    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:40:09.061892    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:40:09.061903    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:40:09.078788    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:40:09.078799    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:40:09.090644    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:40:09.090655    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:40:09.102421    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:40:09.102434    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:40:09.117755    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:40:09.117766    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:40:09.129225    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:40:09.129234    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:40:09.142089    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:40:09.142104    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:40:11.665070    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:11.474225    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:11.474264    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:16.665517    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:16.665645    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:40:16.678248    7666 logs.go:276] 1 containers: [9b7e13c3e9c0]
	I0520 03:40:16.678323    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:40:16.689162    7666 logs.go:276] 1 containers: [95518db1bc0a]
	I0520 03:40:16.689229    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:40:16.699978    7666 logs.go:276] 4 containers: [6322c26e70c9 1ed10418fbc0 e2340c42584c 5cde54d98e3e]
	I0520 03:40:16.700047    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:40:16.710410    7666 logs.go:276] 1 containers: [344f7dc894db]
	I0520 03:40:16.710476    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:40:16.720932    7666 logs.go:276] 1 containers: [44e0fa7db9a5]
	I0520 03:40:16.720993    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:40:16.731803    7666 logs.go:276] 1 containers: [e57d5b2fe37e]
	I0520 03:40:16.731870    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:40:16.742066    7666 logs.go:276] 0 containers: []
	W0520 03:40:16.742077    7666 logs.go:278] No container was found matching "kindnet"
	I0520 03:40:16.742130    7666 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:40:16.752321    7666 logs.go:276] 1 containers: [ce0aea699fc9]
	I0520 03:40:16.752375    7666 logs.go:123] Gathering logs for etcd [95518db1bc0a] ...
	I0520 03:40:16.752381    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95518db1bc0a"
	I0520 03:40:16.766801    7666 logs.go:123] Gathering logs for kube-scheduler [344f7dc894db] ...
	I0520 03:40:16.766811    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 344f7dc894db"
	I0520 03:40:16.782250    7666 logs.go:123] Gathering logs for coredns [6322c26e70c9] ...
	I0520 03:40:16.782264    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6322c26e70c9"
	I0520 03:40:16.794552    7666 logs.go:123] Gathering logs for container status ...
	I0520 03:40:16.794564    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:40:16.806007    7666 logs.go:123] Gathering logs for kube-proxy [44e0fa7db9a5] ...
	I0520 03:40:16.806018    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44e0fa7db9a5"
	I0520 03:40:16.474530    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:16.474575    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:16.820463    7666 logs.go:123] Gathering logs for kube-controller-manager [e57d5b2fe37e] ...
	I0520 03:40:16.820478    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e57d5b2fe37e"
	I0520 03:40:16.837399    7666 logs.go:123] Gathering logs for storage-provisioner [ce0aea699fc9] ...
	I0520 03:40:16.837409    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce0aea699fc9"
	I0520 03:40:16.848706    7666 logs.go:123] Gathering logs for kubelet ...
	I0520 03:40:16.848716    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:40:16.886139    7666 logs.go:123] Gathering logs for kube-apiserver [9b7e13c3e9c0] ...
	I0520 03:40:16.886150    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b7e13c3e9c0"
	I0520 03:40:16.900571    7666 logs.go:123] Gathering logs for coredns [1ed10418fbc0] ...
	I0520 03:40:16.900582    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ed10418fbc0"
	I0520 03:40:16.912664    7666 logs.go:123] Gathering logs for coredns [e2340c42584c] ...
	I0520 03:40:16.912675    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2340c42584c"
	I0520 03:40:16.924858    7666 logs.go:123] Gathering logs for coredns [5cde54d98e3e] ...
	I0520 03:40:16.924869    7666 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cde54d98e3e"
	I0520 03:40:16.936599    7666 logs.go:123] Gathering logs for Docker ...
	I0520 03:40:16.936609    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:40:16.960589    7666 logs.go:123] Gathering logs for dmesg ...
	I0520 03:40:16.960597    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:40:16.965325    7666 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:40:16.965334    7666 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:40:19.504082    7666 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:24.505454    7666 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:24.509020    7666 out.go:177] 
	W0520 03:40:24.511983    7666 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 03:40:24.511993    7666 out.go:239] * 
	W0520 03:40:24.512695    7666 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:40:24.523971    7666 out.go:177] 
	I0520 03:40:21.475411    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:21.475440    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:26.476057    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:26.476103    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:31.477049    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:31.477090    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 03:40:31.876066    7819 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 03:40:31.880112    7819 out.go:177] * Enabled addons: storage-provisioner
	I0520 03:40:31.887987    7819 addons.go:505] duration metric: took 30.528901208s for enable addons: enabled=[storage-provisioner]
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-05-20 10:31:33 UTC, ends at Mon 2024-05-20 10:40:40 UTC. --
	May 20 10:40:25 running-upgrade-908000 dockerd[3212]: time="2024-05-20T10:40:25.996814836Z" level=warning msg="cleanup warnings time=\"2024-05-20T10:40:25Z\" level=info msg=\"starting signal loop\" namespace=moby pid=18513 runtime=io.containerd.runc.v2\n"
	May 20 10:40:26 running-upgrade-908000 dockerd[3212]: time="2024-05-20T10:40:26.064294442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 20 10:40:26 running-upgrade-908000 dockerd[3212]: time="2024-05-20T10:40:26.064326233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 20 10:40:26 running-upgrade-908000 dockerd[3212]: time="2024-05-20T10:40:26.064332316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 20 10:40:26 running-upgrade-908000 dockerd[3212]: time="2024-05-20T10:40:26.064458021Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/87c00c7eb8e8418ab7c65378277be71eefe67bb69599ad1177e22a24a4b007b2 pid=18534 runtime=io.containerd.runc.v2
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x400035b600 linux}"
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x4000a2aa00 linux}"
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x400078a100 linux}"
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x400078a280 linux}"
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x4000a2b880 linux}"
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x400078b8c0 linux}"
	May 20 10:40:26 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:26Z" level=error msg="ContainerStats resp: {0x4000774440 linux}"
	May 20 10:40:27 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:27Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 10:40:32 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 10:40:37 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:37Z" level=error msg="ContainerStats resp: {0x4000835600 linux}"
	May 20 10:40:37 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:37Z" level=error msg="ContainerStats resp: {0x4000835840 linux}"
	May 20 10:40:37 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 20 10:40:38 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:38Z" level=error msg="ContainerStats resp: {0x400078a400 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x400078b340 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x400078b500 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x400078b980 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x400078bdc0 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x4000514500 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x4000514a40 linux}"
	May 20 10:40:39 running-upgrade-908000 cri-dockerd[3054]: time="2024-05-20T10:40:39Z" level=error msg="ContainerStats resp: {0x4000775340 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	87c00c7eb8e84       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   d911019486953
	8d9b526b19acd       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   21e76bb54a360
	6322c26e70c9b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d911019486953
	1ed10418fbc0b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   21e76bb54a360
	44e0fa7db9a51       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   a82f758b5c9d3
	ce0aea699fc93       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   afe914290275c
	344f7dc894db0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   f2f3dd8e2d612
	95518db1bc0ab       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   4c8b4b85b43f2
	e57d5b2fe37e8       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   b42a904f132d3
	9b7e13c3e9c0a       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   35153e3736d08
	
	
	==> coredns [1ed10418fbc0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:46554->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:38923->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:53955->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:40229->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:36236->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:43587->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:42734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:48090->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:47065->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3276253193927837829.4835813876719647427. HINFO: read udp 10.244.0.3:35372->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6322c26e70c9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:46310->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:33827->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:34691->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:52322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:47227->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:45523->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:50668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:56179->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:43519->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2972377893876405700.6426133476545529362. HINFO: read udp 10.244.0.2:41756->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [87c00c7eb8e8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4925890637307454818.2760846207447674399. HINFO: read udp 10.244.0.2:50199->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4925890637307454818.2760846207447674399. HINFO: read udp 10.244.0.2:46231->10.0.2.3:53: i/o timeout
	
	
	==> coredns [8d9b526b19ac] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6794231336016055293.2700741766861995086. HINFO: read udp 10.244.0.3:36193->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6794231336016055293.2700741766861995086. HINFO: read udp 10.244.0.3:40313->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6794231336016055293.2700741766861995086. HINFO: read udp 10.244.0.3:39192->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-908000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-908000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=running-upgrade-908000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T03_36_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:36:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-908000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:40:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:36:23 +0000   Mon, 20 May 2024 10:36:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:36:23 +0000   Mon, 20 May 2024 10:36:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:36:23 +0000   Mon, 20 May 2024 10:36:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:36:23 +0000   Mon, 20 May 2024 10:36:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-908000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ac4b12686b847449485996ce12c25b6
	  System UUID:                1ac4b12686b847449485996ce12c25b6
	  Boot ID:                    d9b74c0f-1176-4e1d-bd58-83e27ae936fc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4s9gj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-zxxcr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-908000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-908000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-908000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-42krm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-908000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-908000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-908000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-908000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-908000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-908000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-908000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-908000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-908000 event: Registered Node running-upgrade-908000 in Controller
	
	
	==> dmesg <==
	[  +1.790183] systemd-fstab-generator[873]: Ignoring "noauto" for root device
	[  +0.069424] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.062691] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +1.148803] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.079823] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.080860] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.932595] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[ +10.140548] systemd-fstab-generator[1936]: Ignoring "noauto" for root device
	[May20 10:32] systemd-fstab-generator[2212]: Ignoring "noauto" for root device
	[  +0.142671] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.087451] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +0.105422] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +2.662134] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.187351] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.078404] systemd-fstab-generator[3022]: Ignoring "noauto" for root device
	[  +0.068089] systemd-fstab-generator[3033]: Ignoring "noauto" for root device
	[  +0.092487] systemd-fstab-generator[3047]: Ignoring "noauto" for root device
	[  +2.174678] systemd-fstab-generator[3198]: Ignoring "noauto" for root device
	[  +3.418152] systemd-fstab-generator[3593]: Ignoring "noauto" for root device
	[  +1.011772] systemd-fstab-generator[3890]: Ignoring "noauto" for root device
	[ +18.772022] kauditd_printk_skb: 68 callbacks suppressed
	[May20 10:36] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.561607] systemd-fstab-generator[11585]: Ignoring "noauto" for root device
	[  +5.620813] systemd-fstab-generator[12199]: Ignoring "noauto" for root device
	[  +0.472062] systemd-fstab-generator[12333]: Ignoring "noauto" for root device
	
	
	==> etcd [95518db1bc0a] <==
	{"level":"info","ts":"2024-05-20T10:36:18.877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-20T10:36:18.878Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-20T10:36:18.895Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T10:36:18.895Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T10:36:18.895Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T10:36:18.895Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-20T10:36:18.895Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-20T10:36:19.843Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-908000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T10:36:19.844Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:36:19.844Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:36:19.844Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:36:19.845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:36:19.845Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:36:19.845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T10:36:19.845Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T10:36:19.845Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:36:19.845Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T10:36:19.846Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 10:40:40 up 9 min,  0 users,  load average: 0.33, 0.39, 0.20
	Linux running-upgrade-908000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9b7e13c3e9c0] <==
	I0520 10:36:21.076266       1 cache.go:39] Caches are synced for autoregister controller
	I0520 10:36:21.076392       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0520 10:36:21.076454       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 10:36:21.076637       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0520 10:36:21.079436       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0520 10:36:21.082757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 10:36:21.114677       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0520 10:36:21.809647       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0520 10:36:21.982390       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 10:36:21.985087       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 10:36:21.985096       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 10:36:22.122385       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 10:36:22.131964       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 10:36:22.159229       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0520 10:36:22.162362       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0520 10:36:22.162767       1 controller.go:611] quota admission added evaluator for: endpoints
	I0520 10:36:22.164086       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 10:36:23.124058       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0520 10:36:23.363330       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0520 10:36:23.366871       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0520 10:36:23.371609       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0520 10:36:23.441588       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 10:36:37.486501       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0520 10:36:37.586492       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0520 10:36:37.988422       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e57d5b2fe37e] <==
	I0520 10:36:36.885171       1 shared_informer.go:262] Caches are synced for PV protection
	I0520 10:36:36.885214       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0520 10:36:36.885248       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0520 10:36:36.885262       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0520 10:36:36.885280       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0520 10:36:36.885329       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0520 10:36:36.885366       1 shared_informer.go:262] Caches are synced for GC
	I0520 10:36:36.885435       1 shared_informer.go:262] Caches are synced for cronjob
	I0520 10:36:36.886608       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0520 10:36:36.890921       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0520 10:36:36.894069       1 shared_informer.go:262] Caches are synced for job
	I0520 10:36:36.935690       1 shared_informer.go:262] Caches are synced for endpoint
	I0520 10:36:37.034128       1 shared_informer.go:262] Caches are synced for HPA
	I0520 10:36:37.040147       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 10:36:37.085118       1 shared_informer.go:262] Caches are synced for disruption
	I0520 10:36:37.085126       1 disruption.go:371] Sending events to api server.
	I0520 10:36:37.087875       1 shared_informer.go:262] Caches are synced for resource quota
	I0520 10:36:37.127993       1 shared_informer.go:262] Caches are synced for stateful set
	I0520 10:36:37.491234       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-42krm"
	I0520 10:36:37.508401       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 10:36:37.535171       1 shared_informer.go:262] Caches are synced for garbage collector
	I0520 10:36:37.535258       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0520 10:36:37.587685       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0520 10:36:37.887814       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4s9gj"
	I0520 10:36:37.895498       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zxxcr"
	
	
	==> kube-proxy [44e0fa7db9a5] <==
	I0520 10:36:37.976423       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0520 10:36:37.976446       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0520 10:36:37.976456       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0520 10:36:37.985864       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0520 10:36:37.985874       1 server_others.go:206] "Using iptables Proxier"
	I0520 10:36:37.985899       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0520 10:36:37.986019       1 server.go:661] "Version info" version="v1.24.1"
	I0520 10:36:37.986057       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:36:37.986370       1 config.go:317] "Starting service config controller"
	I0520 10:36:37.986381       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0520 10:36:37.986390       1 config.go:226] "Starting endpoint slice config controller"
	I0520 10:36:37.986407       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0520 10:36:37.986685       1 config.go:444] "Starting node config controller"
	I0520 10:36:37.986708       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0520 10:36:38.086874       1 shared_informer.go:262] Caches are synced for node config
	I0520 10:36:38.086894       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0520 10:36:38.086904       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [344f7dc894db] <==
	W0520 10:36:21.050790       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:36:21.050797       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:36:21.050810       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:36:21.050858       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:36:21.050882       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 10:36:21.050889       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 10:36:21.050910       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:36:21.050926       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:36:21.050967       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:36:21.050976       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:36:21.051054       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:36:21.051062       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:36:21.935296       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:36:21.935464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:36:21.935749       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:36:21.935766       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:36:21.955782       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:36:21.955809       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:36:21.958526       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:36:21.958554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:36:22.049000       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:36:22.049095       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:36:22.050713       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:36:22.050763       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0520 10:36:22.148153       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-05-20 10:31:33 UTC, ends at Mon 2024-05-20 10:40:41 UTC. --
	May 20 10:36:24 running-upgrade-908000 kubelet[12205]: I0520 10:36:24.825516   12205 reconciler.go:157] "Reconciler: start to sync state"
	May 20 10:36:24 running-upgrade-908000 kubelet[12205]: E0520 10:36:24.999353   12205 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-908000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-908000"
	May 20 10:36:25 running-upgrade-908000 kubelet[12205]: E0520 10:36:25.203163   12205 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-908000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-908000"
	May 20 10:36:25 running-upgrade-908000 kubelet[12205]: E0520 10:36:25.413694   12205 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-908000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-908000"
	May 20 10:36:25 running-upgrade-908000 kubelet[12205]: I0520 10:36:25.591556   12205 request.go:601] Waited for 1.100121043s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 20 10:36:25 running-upgrade-908000 kubelet[12205]: E0520 10:36:25.595305   12205 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-908000\" already exists" pod="kube-system/etcd-running-upgrade-908000"
	May 20 10:36:36 running-upgrade-908000 kubelet[12205]: I0520 10:36:36.841627   12205 topology_manager.go:200] "Topology Admit Handler"
	May 20 10:36:36 running-upgrade-908000 kubelet[12205]: I0520 10:36:36.942362   12205 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 10:36:36 running-upgrade-908000 kubelet[12205]: I0520 10:36:36.942812   12205 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.042662   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/445e8e5c-43b4-489a-8ca9-8b488ddcc0df-tmp\") pod \"storage-provisioner\" (UID: \"445e8e5c-43b4-489a-8ca9-8b488ddcc0df\") " pod="kube-system/storage-provisioner"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.042694   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxws2\" (UniqueName: \"kubernetes.io/projected/445e8e5c-43b4-489a-8ca9-8b488ddcc0df-kube-api-access-pxws2\") pod \"storage-provisioner\" (UID: \"445e8e5c-43b4-489a-8ca9-8b488ddcc0df\") " pod="kube-system/storage-provisioner"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.496095   12205 topology_manager.go:200] "Topology Admit Handler"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.646498   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a10bb03e-4170-4b54-af78-d69e3f862497-lib-modules\") pod \"kube-proxy-42krm\" (UID: \"a10bb03e-4170-4b54-af78-d69e3f862497\") " pod="kube-system/kube-proxy-42krm"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.646518   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a10bb03e-4170-4b54-af78-d69e3f862497-kube-proxy\") pod \"kube-proxy-42krm\" (UID: \"a10bb03e-4170-4b54-af78-d69e3f862497\") " pod="kube-system/kube-proxy-42krm"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.646528   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a10bb03e-4170-4b54-af78-d69e3f862497-xtables-lock\") pod \"kube-proxy-42krm\" (UID: \"a10bb03e-4170-4b54-af78-d69e3f862497\") " pod="kube-system/kube-proxy-42krm"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.646540   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lznn\" (UniqueName: \"kubernetes.io/projected/a10bb03e-4170-4b54-af78-d69e3f862497-kube-api-access-9lznn\") pod \"kube-proxy-42krm\" (UID: \"a10bb03e-4170-4b54-af78-d69e3f862497\") " pod="kube-system/kube-proxy-42krm"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.891769   12205 topology_manager.go:200] "Topology Admit Handler"
	May 20 10:36:37 running-upgrade-908000 kubelet[12205]: I0520 10:36:37.896033   12205 topology_manager.go:200] "Topology Admit Handler"
	May 20 10:36:38 running-upgrade-908000 kubelet[12205]: I0520 10:36:38.049352   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w78vj\" (UniqueName: \"kubernetes.io/projected/5b1afe9a-046c-4628-b5ea-ea337fa0a86b-kube-api-access-w78vj\") pod \"coredns-6d4b75cb6d-4s9gj\" (UID: \"5b1afe9a-046c-4628-b5ea-ea337fa0a86b\") " pod="kube-system/coredns-6d4b75cb6d-4s9gj"
	May 20 10:36:38 running-upgrade-908000 kubelet[12205]: I0520 10:36:38.049384   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslvf\" (UniqueName: \"kubernetes.io/projected/20c86d32-025a-4871-801e-ccbd06fb1dbe-kube-api-access-lslvf\") pod \"coredns-6d4b75cb6d-zxxcr\" (UID: \"20c86d32-025a-4871-801e-ccbd06fb1dbe\") " pod="kube-system/coredns-6d4b75cb6d-zxxcr"
	May 20 10:36:38 running-upgrade-908000 kubelet[12205]: I0520 10:36:38.049399   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c86d32-025a-4871-801e-ccbd06fb1dbe-config-volume\") pod \"coredns-6d4b75cb6d-zxxcr\" (UID: \"20c86d32-025a-4871-801e-ccbd06fb1dbe\") " pod="kube-system/coredns-6d4b75cb6d-zxxcr"
	May 20 10:36:38 running-upgrade-908000 kubelet[12205]: I0520 10:36:38.049414   12205 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b1afe9a-046c-4628-b5ea-ea337fa0a86b-config-volume\") pod \"coredns-6d4b75cb6d-4s9gj\" (UID: \"5b1afe9a-046c-4628-b5ea-ea337fa0a86b\") " pod="kube-system/coredns-6d4b75cb6d-4s9gj"
	May 20 10:36:38 running-upgrade-908000 kubelet[12205]: I0520 10:36:38.646255   12205 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d9110194869539e0aa6e1c37b4398839e48946f97f5cea995ff4a79c973b48ec"
	May 20 10:40:25 running-upgrade-908000 kubelet[12205]: I0520 10:40:25.943362   12205 scope.go:110] "RemoveContainer" containerID="e2340c42584cf518c146f82425c4ac680a26cab2ea896157ba8b7bb6b3a3bf1d"
	May 20 10:40:26 running-upgrade-908000 kubelet[12205]: I0520 10:40:26.960887   12205 scope.go:110] "RemoveContainer" containerID="5cde54d98e3eb58604e46cde219f1979966e9d4fdd1524ab10939934054f6389"
	
	
	==> storage-provisioner [ce0aea699fc9] <==
	I0520 10:36:37.353562       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:36:37.357637       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:36:37.357653       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:36:37.360969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:36:37.361038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-908000_2d893565-00dd-4a3e-a320-0c5448f56560!
	I0520 10:36:37.361275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9debbbde-82dc-43dc-ac94-6793eaef8c11", APIVersion:"v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-908000_2d893565-00dd-4a3e-a320-0c5448f56560 became leader
	I0520 10:36:37.461443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-908000_2d893565-00dd-4a3e-a320-0c5448f56560!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-908000 -n running-upgrade-908000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-908000 -n running-upgrade-908000: exit status 2 (15.64362125s)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-908000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-908000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-908000: (1.133153875s)
--- FAIL: TestRunningBinaryUpgrade (588.76s)
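TestRunningBinaryUpgrade ultimately fails because the upgraded cluster's API server never reports healthy: the log above shows repeated "Checking apiserver healthz at https://10.0.2.15:8443/healthz" attempts ending in client timeouts, followed by the GUEST_START exit. As a minimal diagnostic sketch, the same endpoint can be probed by hand, assuming the profile still exists, the guest is reachable over `minikube ssh`, and curl is present in the guest image (the profile name and URL are taken from this log; nothing else here is confirmed by it):

	# Probe the aggregate health endpoint from inside the guest; -k skips TLS
	# verification and --max-time bounds the wait like the client timeout above.
	minikube ssh -p running-upgrade-908000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# /readyz?verbose breaks the result down per check, showing whether one
	# component (e.g. etcd) is holding the whole health report back.
	minikube ssh -p running-upgrade-908000 -- curl -k --max-time 5 "https://10.0.2.15:8443/readyz?verbose"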
TestKubernetesUpgrade (19.04s)
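TestKubernetesUpgrade never reaches the upgrade itself: both VM creation attempts in the stdout below fail with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, a host-side networking failure rather than a Kubernetes one. A quick check of the endpoint on the macOS host, assuming the paths recorded in the cluster config later in this log (SocketVMnetPath:/var/run/socket_vmnet) and a launchd-managed socket_vmnet install, which this log does not confirm:

	# The unix socket must exist and have the socket_vmnet daemon listening on it.
	ls -l /var/run/socket_vmnet

	# With a launchd-managed install (an assumption about this host), check
	# whether a socket_vmnet job is loaded and running.
	sudo launchctl list | grep -i socket_vmnet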
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-008000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-008000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.939710083s)
-- stdout --
	* [kubernetes-upgrade-008000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-008000" primary control-plane node in "kubernetes-upgrade-008000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-008000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0520 03:34:09.129163    7746 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:34:09.129295    7746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:34:09.129298    7746 out.go:304] Setting ErrFile to fd 2...
	I0520 03:34:09.129300    7746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:34:09.129430    7746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:34:09.130477    7746 out.go:298] Setting JSON to false
	I0520 03:34:09.146824    7746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5620,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:34:09.146936    7746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:34:09.151105    7746 out.go:177] * [kubernetes-upgrade-008000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:34:09.159080    7746 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:34:09.159135    7746 notify.go:220] Checking for updates...
	I0520 03:34:09.163048    7746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:34:09.170052    7746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:34:09.174025    7746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:34:09.177012    7746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:34:09.181012    7746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:34:09.184365    7746 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:34:09.184427    7746 config.go:182] Loaded profile config "running-upgrade-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:34:09.184480    7746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:34:09.188016    7746 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:34:09.195040    7746 start.go:297] selected driver: qemu2
	I0520 03:34:09.195046    7746 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:34:09.195051    7746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:34:09.197522    7746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:34:09.199935    7746 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:34:09.203070    7746 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:34:09.203083    7746 cni.go:84] Creating CNI manager for ""
	I0520 03:34:09.203090    7746 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 03:34:09.203115    7746 start.go:340] cluster config:
	{Name:kubernetes-upgrade-008000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:34:09.207507    7746 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:34:09.215028    7746 out.go:177] * Starting "kubernetes-upgrade-008000" primary control-plane node in "kubernetes-upgrade-008000" cluster
	I0520 03:34:09.219016    7746 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:34:09.219031    7746 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:34:09.219040    7746 cache.go:56] Caching tarball of preloaded images
	I0520 03:34:09.219090    7746 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:34:09.219095    7746 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 03:34:09.219140    7746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kubernetes-upgrade-008000/config.json ...
	I0520 03:34:09.219150    7746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kubernetes-upgrade-008000/config.json: {Name:mk12c62ac7e408428f84f18ef05a4098bc7ba29f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:34:09.219439    7746 start.go:360] acquireMachinesLock for kubernetes-upgrade-008000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:34:09.219472    7746 start.go:364] duration metric: took 26µs to acquireMachinesLock for "kubernetes-upgrade-008000"
	I0520 03:34:09.219484    7746 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:34:09.219516    7746 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:34:09.228025    7746 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:34:09.242574    7746 start.go:159] libmachine.API.Create for "kubernetes-upgrade-008000" (driver="qemu2")
	I0520 03:34:09.242600    7746 client.go:168] LocalClient.Create starting
	I0520 03:34:09.242661    7746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:34:09.242696    7746 main.go:141] libmachine: Decoding PEM data...
	I0520 03:34:09.242706    7746 main.go:141] libmachine: Parsing certificate...
	I0520 03:34:09.242742    7746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:34:09.242768    7746 main.go:141] libmachine: Decoding PEM data...
	I0520 03:34:09.242779    7746 main.go:141] libmachine: Parsing certificate...
	I0520 03:34:09.243185    7746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:34:09.386420    7746 main.go:141] libmachine: Creating SSH key...
	I0520 03:34:09.575814    7746 main.go:141] libmachine: Creating Disk image...
	I0520 03:34:09.575823    7746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:34:09.576041    7746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:09.590353    7746 main.go:141] libmachine: STDOUT: 
	I0520 03:34:09.590373    7746 main.go:141] libmachine: STDERR: 
	I0520 03:34:09.590448    7746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2 +20000M
	I0520 03:34:09.602307    7746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:34:09.602327    7746 main.go:141] libmachine: STDERR: 
	I0520 03:34:09.602349    7746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:09.602354    7746 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:34:09.602393    7746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:c4:08:d5:e1:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:09.604468    7746 main.go:141] libmachine: STDOUT: 
	I0520 03:34:09.604484    7746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:34:09.604504    7746 client.go:171] duration metric: took 361.906833ms to LocalClient.Create
	I0520 03:34:11.606741    7746 start.go:128] duration metric: took 2.387238875s to createHost
	I0520 03:34:11.606831    7746 start.go:83] releasing machines lock for "kubernetes-upgrade-008000", held for 2.387394292s
	W0520 03:34:11.606886    7746 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:34:11.617421    7746 out.go:177] * Deleting "kubernetes-upgrade-008000" in qemu2 ...
	W0520 03:34:11.642889    7746 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:34:11.642941    7746 start.go:728] Will try again in 5 seconds ...
	I0520 03:34:16.645162    7746 start.go:360] acquireMachinesLock for kubernetes-upgrade-008000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:34:16.645799    7746 start.go:364] duration metric: took 505.333µs to acquireMachinesLock for "kubernetes-upgrade-008000"
	I0520 03:34:16.645963    7746 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:34:16.646379    7746 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:34:16.655083    7746 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:34:16.702706    7746 start.go:159] libmachine.API.Create for "kubernetes-upgrade-008000" (driver="qemu2")
	I0520 03:34:16.702759    7746 client.go:168] LocalClient.Create starting
	I0520 03:34:16.702882    7746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:34:16.702948    7746 main.go:141] libmachine: Decoding PEM data...
	I0520 03:34:16.702964    7746 main.go:141] libmachine: Parsing certificate...
	I0520 03:34:16.703023    7746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:34:16.703068    7746 main.go:141] libmachine: Decoding PEM data...
	I0520 03:34:16.703082    7746 main.go:141] libmachine: Parsing certificate...
	I0520 03:34:16.703717    7746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:34:16.867908    7746 main.go:141] libmachine: Creating SSH key...
	I0520 03:34:16.972705    7746 main.go:141] libmachine: Creating Disk image...
	I0520 03:34:16.972713    7746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:34:16.972899    7746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:16.985943    7746 main.go:141] libmachine: STDOUT: 
	I0520 03:34:16.985968    7746 main.go:141] libmachine: STDERR: 
	I0520 03:34:16.986032    7746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2 +20000M
	I0520 03:34:16.997201    7746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:34:16.997222    7746 main.go:141] libmachine: STDERR: 
	I0520 03:34:16.997234    7746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:16.997238    7746 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:34:16.997275    7746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f2:d1:c1:96:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:16.999081    7746 main.go:141] libmachine: STDOUT: 
	I0520 03:34:16.999101    7746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:34:16.999115    7746 client.go:171] duration metric: took 296.356125ms to LocalClient.Create
	I0520 03:34:19.001268    7746 start.go:128] duration metric: took 2.354896208s to createHost
	I0520 03:34:19.001386    7746 start.go:83] releasing machines lock for "kubernetes-upgrade-008000", held for 2.355602625s
	W0520 03:34:19.001786    7746 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-008000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-008000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:34:19.013505    7746 out.go:177] 
	W0520 03:34:19.017605    7746 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:34:19.017633    7746 out.go:239] * 
	* 
	W0520 03:34:19.020391    7746 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:34:19.029448    7746 out.go:177] 

** /stderr **
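Note: both create attempts above fail at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched and createHost gives up. The following Go sketch performs the same reachability probe the client does; it is an illustration against the SocketVMnetPath shown in the config above, not minikube code.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Dial the socket_vmnet control socket the way socket_vmnet_client would;
	// "connection refused" here reproduces the failure seen in this log.
	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}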
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-008000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-008000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-008000: (3.651919042s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-008000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-008000 status --format={{.Host}}: exit status 7 (62.963458ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
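Note: minikube's --format flag renders a Go text/template against its status structure, and the harness tolerates the non-zero exit here because a cleanly stopped host is an expected state at this point in the test ("may be ok"). A minimal sketch of that template mechanism; the status struct below is a stand-in with only the Host field assumed by {{.Host}}, not minikube's actual type.

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the structure minikube renders with --format={{.Host}}.
	type status struct{ Host string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the stdout captured above.
		if err := t.Execute(os.Stdout, status{Host: "Stopped"}); err != nil {
			panic(err)
		}
	}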
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-008000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-008000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184978958s)

-- stdout --
	* [kubernetes-upgrade-008000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-008000" primary control-plane node in "kubernetes-upgrade-008000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-008000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-008000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:34:22.788722    7785 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:34:22.788861    7785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:34:22.788867    7785 out.go:304] Setting ErrFile to fd 2...
	I0520 03:34:22.788869    7785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:34:22.788982    7785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:34:22.789951    7785 out.go:298] Setting JSON to false
	I0520 03:34:22.806187    7785 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5633,"bootTime":1716195629,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:34:22.806251    7785 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:34:22.811397    7785 out.go:177] * [kubernetes-upgrade-008000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:34:22.818344    7785 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:34:22.822335    7785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:34:22.818428    7785 notify.go:220] Checking for updates...
	I0520 03:34:22.825340    7785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:34:22.828252    7785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:34:22.831319    7785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:34:22.834350    7785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:34:22.837531    7785 config.go:182] Loaded profile config "kubernetes-upgrade-008000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 03:34:22.837772    7785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:34:22.842297    7785 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:34:22.848310    7785 start.go:297] selected driver: qemu2
	I0520 03:34:22.848319    7785 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:34:22.848415    7785 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:34:22.850576    7785 cni.go:84] Creating CNI manager for ""
	I0520 03:34:22.850593    7785 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:34:22.850614    7785 start.go:340] cluster config:
	{Name:kubernetes-upgrade-008000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-008000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:34:22.854684    7785 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:34:22.862335    7785 out.go:177] * Starting "kubernetes-upgrade-008000" primary control-plane node in "kubernetes-upgrade-008000" cluster
	I0520 03:34:22.866303    7785 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:34:22.866319    7785 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:34:22.866331    7785 cache.go:56] Caching tarball of preloaded images
	I0520 03:34:22.866391    7785 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:34:22.866402    7785 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:34:22.866453    7785 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kubernetes-upgrade-008000/config.json ...
	I0520 03:34:22.866802    7785 start.go:360] acquireMachinesLock for kubernetes-upgrade-008000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:34:22.866828    7785 start.go:364] duration metric: took 21µs to acquireMachinesLock for "kubernetes-upgrade-008000"
	I0520 03:34:22.866837    7785 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:34:22.866844    7785 fix.go:54] fixHost starting: 
	I0520 03:34:22.866955    7785 fix.go:112] recreateIfNeeded on kubernetes-upgrade-008000: state=Stopped err=<nil>
	W0520 03:34:22.866964    7785 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:34:22.875348    7785 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-008000" ...
	I0520 03:34:22.879312    7785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f2:d1:c1:96:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:22.881147    7785 main.go:141] libmachine: STDOUT: 
	I0520 03:34:22.881171    7785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:34:22.881203    7785 fix.go:56] duration metric: took 14.359458ms for fixHost
	I0520 03:34:22.881208    7785 start.go:83] releasing machines lock for "kubernetes-upgrade-008000", held for 14.375917ms
	W0520 03:34:22.881212    7785 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:34:22.881237    7785 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:34:22.881240    7785 start.go:728] Will try again in 5 seconds ...
	I0520 03:34:27.881469    7785 start.go:360] acquireMachinesLock for kubernetes-upgrade-008000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:34:27.881977    7785 start.go:364] duration metric: took 391.917µs to acquireMachinesLock for "kubernetes-upgrade-008000"
	I0520 03:34:27.882059    7785 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:34:27.882078    7785 fix.go:54] fixHost starting: 
	I0520 03:34:27.882787    7785 fix.go:112] recreateIfNeeded on kubernetes-upgrade-008000: state=Stopped err=<nil>
	W0520 03:34:27.882814    7785 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:34:27.892514    7785 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-008000" ...
	I0520 03:34:27.896669    7785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:f2:d1:c1:96:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubernetes-upgrade-008000/disk.qcow2
	I0520 03:34:27.906661    7785 main.go:141] libmachine: STDOUT: 
	I0520 03:34:27.906724    7785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:34:27.906799    7785 fix.go:56] duration metric: took 24.72275ms for fixHost
	I0520 03:34:27.906822    7785 start.go:83] releasing machines lock for "kubernetes-upgrade-008000", held for 24.820791ms
	W0520 03:34:27.907023    7785 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-008000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-008000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:34:27.915543    7785 out.go:177] 
	W0520 03:34:27.918669    7785 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:34:27.918692    7785 out.go:239] * 
	* 
	W0520 03:34:27.921470    7785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:34:27.930584    7785 out.go:177] 

** /stderr **
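Note: the second start takes the existing-machine path (fixHost) instead of createHost, but restarting the VM hits the identical socket_vmnet refusal, once immediately and once more after the fixed five-second wait logged at start.go:728. A minimal sketch of that try-wait-retry shape; startHost is a stand-in for minikube's real start path, not its actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-in that always fails the way the driver does in this log.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the fixed backoff seen at start.go:728
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}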
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-008000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-008000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-008000 version --output=json: exit status 1 (65.337667ms)

** stderr ** 
	error: context "kubernetes-upgrade-008000" does not exist

** /stderr **
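Note: because neither start ever produced a running apiserver, no "kubernetes-upgrade-008000" entry was written to the kubeconfig, so any kubectl invocation with that --context fails before contacting a cluster. A small sketch that lists the contexts actually present (kubectl config get-contexts -o name is standard kubectl) before assuming one exists.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Enumerate kubeconfig contexts; the failed profile will be absent.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl error:", err)
			return
		}
		fmt.Printf("known contexts:\n%s", out)
	}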
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-20 03:34:28.011333 -0700 PDT m=+927.532519751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-008000 -n kubernetes-upgrade-008000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-008000 -n kubernetes-upgrade-008000: exit status 7 (32.138333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-008000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-008000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-008000
--- FAIL: TestKubernetesUpgrade (19.04s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.98s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18925
- KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2357388472/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.98s)
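Note: unlike the socket_vmnet failures above, this is a platform mismatch: the hyperkit driver only exists for x86_64 macOS, so on this darwin/arm64 agent minikube exits with DRV_UNSUPPORTED_OS (status 56), and the v1.2.0-to-current variant below fails identically. A sketch of the kind of architecture guard that would skip such a case up front; illustrative, not the harness's actual code.

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		// hyperkit is x86_64-only; bail out early on Apple Silicon.
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			fmt.Println("skip: the 'hyperkit' driver is not supported on darwin/arm64")
			return
		}
		fmt.Println("hyperkit driver tests can run on this host")
	}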

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.98s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=18925
- KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3222718878/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.98s)

TestStoppedBinaryUpgrade/Upgrade (573.39s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2581433343 start -p stopped-upgrade-555000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2581433343 start -p stopped-upgrade-555000 --memory=2200 --vm-driver=qemu2 : (39.6542715s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2581433343 -p stopped-upgrade-555000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2581433343 -p stopped-upgrade-555000 stop: (12.098225833s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-555000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-555000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.525783083s)

-- stdout --
	* [stopped-upgrade-555000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-555000" primary control-plane node in "stopped-upgrade-555000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-555000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
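Note: this restart gets much further than the kubernetes-upgrade runs above because the stopped-upgrade-555000 profile was created by minikube v1.26.0 with user-mode networking (its config carries empty SocketVMnetClientPath/SocketVMnetPath), so QEMU is launched with -nic user,...,hostfwd=tcp::51268-:22 and no socket_vmnet daemon is needed; readiness then reduces to reaching the forwarded SSH port, as the "Waiting for VM to start (ssh -p 51268 docker@127.0.0.1)" line in the stderr below shows. A minimal probe of that forwarded port; the port number is taken from this log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With hostfwd=tcp::51268-:22, guest SSH is plain TCP on localhost.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:51268", 3*time.Second)
		if err != nil {
			fmt.Println("guest SSH not reachable yet:", err)
			return
		}
		conn.Close()
		fmt.Println("guest SSH port is open")
	}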
** stderr ** 
	I0520 03:35:21.003210    7819 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:35:21.003335    7819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:35:21.003342    7819 out.go:304] Setting ErrFile to fd 2...
	I0520 03:35:21.003344    7819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:35:21.003464    7819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:35:21.004453    7819 out.go:298] Setting JSON to false
	I0520 03:35:21.021683    7819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5692,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:35:21.021756    7819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:35:21.026450    7819 out.go:177] * [stopped-upgrade-555000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:35:21.034374    7819 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:35:21.037465    7819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:35:21.034457    7819 notify.go:220] Checking for updates...
	I0520 03:35:21.043455    7819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:35:21.046435    7819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:35:21.047794    7819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:35:21.050431    7819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:35:21.053700    7819 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:35:21.057413    7819 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 03:35:21.060353    7819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:35:21.064361    7819 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:35:21.071418    7819 start.go:297] selected driver: qemu2
	I0520 03:35:21.071425    7819 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:35:21.071493    7819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:35:21.074019    7819 cni.go:84] Creating CNI manager for ""
	I0520 03:35:21.074035    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:35:21.074059    7819 start.go:340] cluster config:
	{Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:35:21.074113    7819 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:35:21.080284    7819 out.go:177] * Starting "stopped-upgrade-555000" primary control-plane node in "stopped-upgrade-555000" cluster
	I0520 03:35:21.084395    7819 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 03:35:21.084412    7819 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0520 03:35:21.084423    7819 cache.go:56] Caching tarball of preloaded images
	I0520 03:35:21.084484    7819 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:35:21.084491    7819 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0520 03:35:21.084548    7819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/config.json ...
	I0520 03:35:21.084932    7819 start.go:360] acquireMachinesLock for stopped-upgrade-555000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:35:21.084960    7819 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "stopped-upgrade-555000"
	I0520 03:35:21.084970    7819 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:35:21.084977    7819 fix.go:54] fixHost starting: 
	I0520 03:35:21.085090    7819 fix.go:112] recreateIfNeeded on stopped-upgrade-555000: state=Stopped err=<nil>
	W0520 03:35:21.085098    7819 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:35:21.089374    7819 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-555000" ...
	I0520 03:35:21.097500    7819 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51268-:22,hostfwd=tcp::51269-:2376,hostname=stopped-upgrade-555000 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/disk.qcow2
	I0520 03:35:21.144629    7819 main.go:141] libmachine: STDOUT: 
	I0520 03:35:21.144665    7819 main.go:141] libmachine: STDERR: 
	I0520 03:35:21.144672    7819 main.go:141] libmachine: Waiting for VM to start (ssh -p 51268 docker@127.0.0.1)...
	I0520 03:35:41.067104    7819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/config.json ...
	I0520 03:35:41.067753    7819 machine.go:94] provisionDockerMachine start ...
	I0520 03:35:41.067906    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.068401    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.068415    7819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 03:35:41.153210    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 03:35:41.153243    7819 buildroot.go:166] provisioning hostname "stopped-upgrade-555000"
	I0520 03:35:41.153370    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.153617    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.153632    7819 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-555000 && echo "stopped-upgrade-555000" | sudo tee /etc/hostname
	I0520 03:35:41.228435    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-555000
	
	I0520 03:35:41.228517    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.228672    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.228680    7819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-555000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-555000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-555000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 03:35:41.294517    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 03:35:41.294528    7819 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18925-5286/.minikube CaCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18925-5286/.minikube}
	I0520 03:35:41.294535    7819 buildroot.go:174] setting up certificates
	I0520 03:35:41.294539    7819 provision.go:84] configureAuth start
	I0520 03:35:41.294547    7819 provision.go:143] copyHostCerts
	I0520 03:35:41.294624    7819 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem, removing ...
	I0520 03:35:41.294633    7819 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem
	I0520 03:35:41.294729    7819 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.pem (1078 bytes)
	I0520 03:35:41.294906    7819 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem, removing ...
	I0520 03:35:41.294910    7819 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem
	I0520 03:35:41.294958    7819 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/cert.pem (1123 bytes)
	I0520 03:35:41.295057    7819 exec_runner.go:144] found /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem, removing ...
	I0520 03:35:41.295061    7819 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem
	I0520 03:35:41.295104    7819 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18925-5286/.minikube/key.pem (1675 bytes)
	I0520 03:35:41.295184    7819 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-555000 san=[127.0.0.1 localhost minikube stopped-upgrade-555000]
	I0520 03:35:41.401606    7819 provision.go:177] copyRemoteCerts
	I0520 03:35:41.401643    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 03:35:41.401650    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:35:41.436524    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 03:35:41.443123    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 03:35:41.450353    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 03:35:41.457649    7819 provision.go:87] duration metric: took 163.108875ms to configureAuth
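configureAuth mints a Docker server certificate signed by the minikube CA, with SANs covering the log's san=[127.0.0.1 localhost minikube stopped-upgrade-555000] list, then scp's the CA cert, server cert, and server key into /etc/docker. A hedged Go sketch of the generation step with crypto/x509; key size, serial numbers, and lifetimes are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCert issues a TLS server certificate signed by the given CA whose
// SANs match the ones in the log above.
func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-555000"}},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-555000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA for demonstration only; the real run reuses ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := serverCert(ca, caKey)
	fmt.Println(len(der), err)
}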
	I0520 03:35:41.457658    7819 buildroot.go:189] setting minikube options for container-runtime
	I0520 03:35:41.457757    7819 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:35:41.457795    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.457880    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.457885    7819 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 03:35:41.520430    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0520 03:35:41.520438    7819 buildroot.go:70] root file system type: tmpfs
	I0520 03:35:41.520490    7819 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 03:35:41.520534    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.520640    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.520673    7819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 03:35:41.587266    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 03:35:41.587321    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:41.587430    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:41.587438    7819 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 03:35:41.955373    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0520 03:35:41.955385    7819 machine.go:97] duration metric: took 887.639125ms to provisionDockerMachine
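The diff || { mv ...; systemctl ... } one-liner above makes the unit update idempotent: the rendered docker.service.new only replaces the installed unit, followed by daemon-reload, enable, and restart, when the two files differ. Here the old unit did not exist yet, hence the "can't stat" diff output and the fresh symlink. The same swap-on-change idea as a Go sketch, with illustrative paths:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// swapIfChanged leaves the installed unit (and the running daemon) alone
// when the freshly rendered file is byte-identical; a missing unit counts
// as changed, matching the diff failure branch in the log.
func swapIfChanged(current, staged string) (bool, error) {
	newB, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	oldB, errOld := os.ReadFile(current)
	if errOld == nil && bytes.Equal(oldB, newB) {
		return false, os.Remove(staged) // no change: discard the .new file
	}
	// Caller then runs systemctl daemon-reload / enable / restart.
	return true, os.Rename(staged, current)
}

func main() {
	changed, err := swapIfChanged("/tmp/docker.service", "/tmp/docker.service.new")
	fmt.Println(changed, err)
}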
	I0520 03:35:41.955392    7819 start.go:293] postStartSetup for "stopped-upgrade-555000" (driver="qemu2")
	I0520 03:35:41.955398    7819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 03:35:41.955462    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 03:35:41.955471    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:35:41.988503    7819 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 03:35:41.989727    7819 info.go:137] Remote host: Buildroot 2021.02.12
	I0520 03:35:41.989735    7819 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18925-5286/.minikube/addons for local assets ...
	I0520 03:35:41.989822    7819 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18925-5286/.minikube/files for local assets ...
	I0520 03:35:41.989946    7819 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem -> 58182.pem in /etc/ssl/certs
	I0520 03:35:41.990079    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 03:35:41.993415    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem --> /etc/ssl/certs/58182.pem (1708 bytes)
	I0520 03:35:42.001539    7819 start.go:296] duration metric: took 46.139834ms for postStartSetup
	I0520 03:35:42.001560    7819 fix.go:56] duration metric: took 20.916973708s for fixHost
	I0520 03:35:42.001611    7819 main.go:141] libmachine: Using SSH client type: native
	I0520 03:35:42.001743    7819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b16900] 0x104b19160 <nil>  [] 0s} localhost 51268 <nil> <nil>}
	I0520 03:35:42.001747    7819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 03:35:42.068295    7819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201341.863333879
	
	I0520 03:35:42.068304    7819 fix.go:216] guest clock: 1716201341.863333879
	I0520 03:35:42.068308    7819 fix.go:229] Guest: 2024-05-20 03:35:41.863333879 -0700 PDT Remote: 2024-05-20 03:35:42.001562 -0700 PDT m=+21.018680417 (delta=-138.228121ms)
	I0520 03:35:42.068319    7819 fix.go:200] guest clock delta is within tolerance: -138.228121ms
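The clock check parses the guest's `date +%s.%N` output into a timestamp and compares it against the host clock; a resync is only forced when the delta leaves the tolerance window (this run measured -138ms and passed). A small Go sketch of that comparison; the one-second tolerance is illustrative:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// and returns guest minus host.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("unexpected date output %q", guestOut)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, err
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	d, _ := clockDelta("1716201341.863333879", time.Now())
	if d < -time.Second || d > time.Second {
		fmt.Println("guest clock drift too large:", d)
	} else {
		fmt.Println("within tolerance:", d)
	}
}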
	I0520 03:35:42.068322    7819 start.go:83] releasing machines lock for "stopped-upgrade-555000", held for 20.983748042s
	I0520 03:35:42.068387    7819 ssh_runner.go:195] Run: cat /version.json
	I0520 03:35:42.068397    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:35:42.068387    7819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 03:35:42.068486    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	W0520 03:35:42.068951    7819 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51268: connect: connection refused
	I0520 03:35:42.068973    7819 retry.go:31] will retry after 196.23682ms: dial tcp [::1]:51268: connect: connection refused
	W0520 03:35:42.304536    7819 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0520 03:35:42.304642    7819 ssh_runner.go:195] Run: systemctl --version
	I0520 03:35:42.307315    7819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 03:35:42.309771    7819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 03:35:42.309810    7819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 03:35:42.313577    7819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 03:35:42.319091    7819 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 03:35:42.319099    7819 start.go:494] detecting cgroup driver to use...
	I0520 03:35:42.319189    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:35:42.326818    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0520 03:35:42.330606    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 03:35:42.333920    7819 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 03:35:42.333950    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 03:35:42.337026    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:35:42.339687    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 03:35:42.342619    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 03:35:42.345817    7819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 03:35:42.348717    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 03:35:42.351569    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 03:35:42.354647    7819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 03:35:42.357967    7819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 03:35:42.360849    7819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 03:35:42.363343    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:42.440627    7819 ssh_runner.go:195] Run: sudo systemctl restart containerd
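The sed series above edits /etc/containerd/config.toml in place: pinning the sandbox image, switching runtime handlers to io.containerd.runc.v2, pointing conf_dir at /etc/cni/net.d, and forcing `SystemdCgroup = false` so containerd uses the cgroupfs driver the kubelet is configured for. A Go sketch of one such whole-file rewrite; the path and file mode are illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// forceCgroupfs flips every `SystemdCgroup = ...` line in config.toml,
// preserving indentation, like the sed command in the log.
func forceCgroupfs(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(b, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	fmt.Println(forceCgroupfs("/etc/containerd/config.toml"))
}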
	I0520 03:35:42.446615    7819 start.go:494] detecting cgroup driver to use...
	I0520 03:35:42.446678    7819 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 03:35:42.455762    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:35:42.460647    7819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 03:35:42.470243    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 03:35:42.474841    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:35:42.479645    7819 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0520 03:35:42.518157    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 03:35:42.523547    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 03:35:42.529052    7819 ssh_runner.go:195] Run: which cri-dockerd
	I0520 03:35:42.530445    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 03:35:42.533492    7819 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 03:35:42.538504    7819 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 03:35:42.622726    7819 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 03:35:42.691906    7819 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 03:35:42.691982    7819 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 03:35:42.696938    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:42.773238    7819 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:35:43.948429    7819 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.175197s)
	I0520 03:35:43.948487    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 03:35:43.953358    7819 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0520 03:35:43.960630    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:35:43.965128    7819 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 03:35:44.044529    7819 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 03:35:44.119431    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:44.199341    7819 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 03:35:44.205895    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 03:35:44.211025    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:44.291758    7819 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 03:35:44.335643    7819 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 03:35:44.335723    7819 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 03:35:44.338032    7819 start.go:562] Will wait 60s for crictl version
	I0520 03:35:44.338073    7819 ssh_runner.go:195] Run: which crictl
	I0520 03:35:44.340245    7819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 03:35:44.357073    7819 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
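Both waits above are bounded polls: stat the CRI socket until it exists, then probe crictl until the runtime answers, each within a 60s budget. A minimal Go sketch of the socket wait; the poll interval is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}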
	I0520 03:35:44.357147    7819 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:35:44.376488    7819 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 03:35:44.397725    7819 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0520 03:35:44.397845    7819 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0520 03:35:44.399322    7819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
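The /etc/hosts update is made idempotent by filtering out any existing line for the name before appending the fresh mapping, which is exactly what the grep -v + echo pipeline above does. The same logic as a Go sketch, with an illustrative path:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any line ending in "\t<name>", then appends
// "<ip>\t<name>", mirroring the shell pipeline in the log.
func setHostsEntry(path, ip, name string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(setHostsEntry("/tmp/hosts", "10.0.2.2", "host.minikube.internal"))
}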
	I0520 03:35:44.403154    7819 kubeadm.go:877] updating cluster {Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0520 03:35:44.403212    7819 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0520 03:35:44.403258    7819 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:35:44.414970    7819 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:35:44.414980    7819 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 03:35:44.415022    7819 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:35:44.418736    7819 ssh_runner.go:195] Run: which lz4
	I0520 03:35:44.420198    7819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 03:35:44.421373    7819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 03:35:44.421384    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0520 03:35:45.146021    7819 docker.go:649] duration metric: took 725.851792ms to copy over tarball
	I0520 03:35:45.146080    7819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 03:35:46.336944    7819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.190867541s)
	I0520 03:35:46.336957    7819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 03:35:46.352313    7819 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0520 03:35:46.355344    7819 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0520 03:35:46.360516    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:46.445583    7819 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 03:35:48.052559    7819 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.606988792s)
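The preload path avoids pulling each image over the network: since /preloaded.tar.lz4 was absent, the tarball (359514331 bytes in the log) is scp'd in and untarred straight over /var, preserving xattrs such as security.capability, then repositories.json is rewritten and Docker restarted to pick up the new image store. A Go sketch of the extract-and-clean-up step, run locally here where minikube issues the same commands over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// applyPreload extracts the lz4-compressed docker image store into /var
// and removes the tarball afterward, like the log's tar and rm steps.
func applyPreload(tarball string) error {
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := extract.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	fmt.Println(applyPreload("/preloaded.tar.lz4"))
}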
	I0520 03:35:48.052678    7819 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 03:35:48.064599    7819 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 03:35:48.064608    7819 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0520 03:35:48.064614    7819 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 03:35:48.070332    7819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.070345    7819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:48.070381    7819 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0520 03:35:48.070430    7819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.070490    7819 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.070513    7819 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.070491    7819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.070530    7819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.077786    7819 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0520 03:35:48.077911    7819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.077972    7819 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.078619    7819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.078623    7819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.078731    7819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.078778    7819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.078814    7819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:48.462707    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0520 03:35:48.469698    7819 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0520 03:35:48.469822    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.475973    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0520 03:35:48.484683    7819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0520 03:35:48.484712    7819 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.484766    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0520 03:35:48.485978    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.497785    7819 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0520 03:35:48.497804    7819 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0520 03:35:48.497785    7819 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0520 03:35:48.497857    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0520 03:35:48.497880    7819 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.497983    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0520 03:35:48.503719    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.509636    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0520 03:35:48.509674    7819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0520 03:35:48.509692    7819 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.509744    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0520 03:35:48.515250    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.530262    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0520 03:35:48.530379    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0520 03:35:48.531523    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0520 03:35:48.531600    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0520 03:35:48.534590    7819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0520 03:35:48.534602    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0520 03:35:48.534606    7819 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.534653    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0520 03:35:48.540711    7819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0520 03:35:48.540732    7819 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.540741    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0520 03:35:48.540760    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0520 03:35:48.540786    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0520 03:35:48.540732    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0520 03:35:48.540850    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0520 03:35:48.553625    7819 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0520 03:35:48.553643    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0520 03:35:48.556174    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0520 03:35:48.558695    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.593731    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0520 03:35:48.614818    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0520 03:35:48.614841    7819 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0520 03:35:48.614860    7819 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.614844    7819 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0520 03:35:48.614915    7819 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0520 03:35:48.614915    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0520 03:35:48.660528    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0520 03:35:48.660565    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0520 03:35:48.660673    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0520 03:35:48.662038    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0520 03:35:48.662046    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0520 03:35:48.818879    7819 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0520 03:35:48.818906    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0520 03:35:48.946186    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0520 03:35:48.996135    7819 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0520 03:35:48.996245    7819 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:49.007368    7819 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0520 03:35:49.007390    7819 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:49.007443    7819 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:35:49.021798    7819 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 03:35:49.021915    7819 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 03:35:49.023583    7819 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 03:35:49.023607    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0520 03:35:49.054837    7819 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 03:35:49.054852    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0520 03:35:49.281179    7819 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 03:35:49.281226    7819 cache_images.go:92] duration metric: took 1.216625375s to LoadCachedImages
	W0520 03:35:49.281263    7819 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
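Each "needs transfer" decision above compares the image ID reported by `docker image inspect` against the hash expected for the registry.k8s.io tag (the preload only shipped k8s.gcr.io tags, so every image mismatches); on mismatch the stale tag is removed and the cached tarball is streamed into `docker load`. The final warning fires because the kube-scheduler (and other kube-*) tarballs were never written to the host cache. A hedged Go sketch of the per-image flow; the expectedID parameter stands in for minikube's cache metadata:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage reloads an image from a cached tarball only when the
// runtime's image ID does not match the expected hash.
func ensureImage(name, expectedID, cachedTar string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", name).Output()
	id := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	if err == nil && id == expectedID {
		return nil // already present at the right hash
	}
	exec.Command("docker", "rmi", name).Run() // ignore failure if the tag is absent
	return exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", cachedTar)).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}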
	I0520 03:35:49.281270    7819 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0520 03:35:49.281323    7819 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-555000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 03:35:49.281387    7819 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 03:35:49.296102    7819 cni.go:84] Creating CNI manager for ""
	I0520 03:35:49.296115    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:35:49.296122    7819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 03:35:49.296130    7819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-555000 NodeName:stopped-upgrade-555000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 03:35:49.296194    7819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-555000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
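The kubeadm.yaml above (scp'd as the 2096-byte kubeadm.yaml.new) is rendered from a template parameterized by the node IP, API server port, and cluster name. A Go sketch of that rendering with text/template; this fragment and its field names are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// frag is a hypothetical slice of the kubeadm config template, covering
// just the fields that vary per node in the log above.
const frag = `localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
		Name   string
	}{NodeIP: "10.0.2.15", Port: 8443, Name: "stopped-upgrade-555000"})
}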
	
	I0520 03:35:49.296245    7819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0520 03:35:49.299067    7819 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 03:35:49.299101    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 03:35:49.302210    7819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0520 03:35:49.307259    7819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 03:35:49.312117    7819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0520 03:35:49.317414    7819 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0520 03:35:49.318607    7819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 03:35:49.322187    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:35:49.400267    7819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:35:49.405464    7819 certs.go:68] Setting up /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000 for IP: 10.0.2.15
	I0520 03:35:49.405473    7819 certs.go:194] generating shared ca certs ...
	I0520 03:35:49.405482    7819 certs.go:226] acquiring lock for ca certs: {Name:mk32e3e05b22049132d2a360697fa20a693ff13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.405652    7819 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.key
	I0520 03:35:49.405705    7819 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.key
	I0520 03:35:49.405711    7819 certs.go:256] generating profile certs ...
	I0520 03:35:49.405782    7819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.key
	I0520 03:35:49.405798    7819 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f
	I0520 03:35:49.405809    7819 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0520 03:35:49.477219    7819 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f ...
	I0520 03:35:49.477233    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f: {Name:mkdda6f3ad96fcf46ee377b38b4e95938eea1041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.477565    7819 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f ...
	I0520 03:35:49.477574    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f: {Name:mk7ebeba82b864cfb00ad2530e5f8c957755d74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.477704    7819 certs.go:381] copying /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt.1114808f -> /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt
	I0520 03:35:49.477832    7819 certs.go:385] copying /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key.1114808f -> /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key
	I0520 03:35:49.477977    7819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/proxy-client.key
	I0520 03:35:49.478108    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818.pem (1338 bytes)
	W0520 03:35:49.478138    7819 certs.go:480] ignoring /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818_empty.pem, impossibly tiny 0 bytes
	I0520 03:35:49.478152    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 03:35:49.478176    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem (1078 bytes)
	I0520 03:35:49.478195    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem (1123 bytes)
	I0520 03:35:49.478216    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/key.pem (1675 bytes)
	I0520 03:35:49.478258    7819 certs.go:484] found cert: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem (1708 bytes)
	I0520 03:35:49.478581    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 03:35:49.485833    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0520 03:35:49.493103    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 03:35:49.499921    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 03:35:49.506628    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 03:35:49.514155    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 03:35:49.521465    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 03:35:49.528466    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 03:35:49.535268    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/5818.pem --> /usr/share/ca-certificates/5818.pem (1338 bytes)
	I0520 03:35:49.542106    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/ssl/certs/58182.pem --> /usr/share/ca-certificates/58182.pem (1708 bytes)
	I0520 03:35:49.549434    7819 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 03:35:49.556109    7819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 03:35:49.561195    7819 ssh_runner.go:195] Run: openssl version
	I0520 03:35:49.563383    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5818.pem && ln -fs /usr/share/ca-certificates/5818.pem /etc/ssl/certs/5818.pem"
	I0520 03:35:49.566533    7819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5818.pem
	I0520 03:35:49.567969    7819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:19 /usr/share/ca-certificates/5818.pem
	I0520 03:35:49.567994    7819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5818.pem
	I0520 03:35:49.569650    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5818.pem /etc/ssl/certs/51391683.0"
	I0520 03:35:49.572508    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/58182.pem && ln -fs /usr/share/ca-certificates/58182.pem /etc/ssl/certs/58182.pem"
	I0520 03:35:49.575249    7819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/58182.pem
	I0520 03:35:49.576673    7819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:19 /usr/share/ca-certificates/58182.pem
	I0520 03:35:49.576691    7819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/58182.pem
	I0520 03:35:49.578451    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/58182.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 03:35:49.581880    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 03:35:49.584958    7819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:35:49.586397    7819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:35:49.586418    7819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 03:35:49.588289    7819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
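The ln -fs steps exist because OpenSSL looks up trust anchors in /etc/ssl/certs by subject-hash filenames such as 51391683.0 and b5213941.0 above; the hash comes from `openssl x509 -hash -noout`. A Go sketch of computing the hash and (re)creating the link, with an illustrative certs directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash points <certsDir>/<subject-hash>.0 at the PEM, replacing any
// stale link the way ln -fs does.
func linkByHash(certsDir, pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // force-replace, like ln -fs
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkByHash("/tmp/certs", "/tmp/certs/minikubeCA.pem")
	fmt.Println(link, err)
}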
	I0520 03:35:49.591030    7819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 03:35:49.592527    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 03:35:49.594625    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 03:35:49.596720    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 03:35:49.598783    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 03:35:49.600957    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 03:35:49.602986    7819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
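Each `-checkend 86400` run above asserts that the certificate will still be valid 24 hours from now, so the restart never proceeds on the back of an about-to-expire cert. The equivalent check as a Go sketch using crypto/x509, with an illustrative path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd fails when the certificate expires within the given window,
// matching openssl x509 -checkend semantics.
func checkEnd(path string, window time.Duration) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(b)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	fmt.Println(checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
}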
	I0520 03:35:49.605065    7819 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51302 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0520 03:35:49.605136    7819 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:35:49.615516    7819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 03:35:49.618854    7819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 03:35:49.618860    7819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 03:35:49.618863    7819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 03:35:49.618884    7819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 03:35:49.621830    7819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 03:35:49.622137    7819 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-555000" does not appear in /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:35:49.622230    7819 kubeconfig.go:62] /Users/jenkins/minikube-integration/18925-5286/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-555000" cluster setting kubeconfig missing "stopped-upgrade-555000" context setting]
	I0520 03:35:49.622417    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:35:49.622845    7819 kapi.go:59] client config for stopped-upgrade-555000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ea0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
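
The `client config` line above is the rest.Config minikube assembles for the profile. A minimal client-go sketch of building an equivalent client from cert/key/CA files; the host is taken from the log and the paths are placeholders:

    // Sketch of constructing a Kubernetes client from the TLS file paths
    // shown in the rest.Config dump above. Paths are illustrative.
    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/profiles/stopped-upgrade-555000/client.crt", // placeholder
    			KeyFile:  "/path/to/profiles/stopped-upgrade-555000/client.key", // placeholder
    			CAFile:   "/path/to/.minikube/ca.crt",                           // placeholder
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	_ = clientset // e.g. clientset.CoreV1().Nodes().List(...)
    }
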
	I0520 03:35:49.623271    7819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 03:35:49.625973    7819 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-555000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
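
The drift check above hinges on `diff -u` exit codes: 0 means the rendered kubeadm.yaml matches, 1 means the files differ (reconfigure), and anything higher is an error. A sketch of that pattern, assuming both files exist locally:

    // driftcheck.go — sketch of detecting kubeadm config drift via diff's
    // exit status, as in the log above. 0 = identical, 1 = drift, >1 = error.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		fmt.Println("no drift")
    		return
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Printf("drift detected, will reconfigure:\n%s", out)
    		return
    	}
    	fmt.Println("diff failed:", err)
    }
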
	I0520 03:35:49.625978    7819 kubeadm.go:1154] stopping kube-system containers ...
	I0520 03:35:49.626016    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 03:35:49.636733    7819 docker.go:483] Stopping containers: [6474d3cde87b 27d8c3bf7a0b d8573923fa37 30a7c27597f7 e20d655db97e 137ba8a1eae4 642180187ce6 7ba00f49f8cf]
	I0520 03:35:49.636800    7819 ssh_runner.go:195] Run: docker stop 6474d3cde87b 27d8c3bf7a0b d8573923fa37 30a7c27597f7 e20d655db97e 137ba8a1eae4 642180187ce6 7ba00f49f8cf
	I0520 03:35:49.647100    7819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 03:35:49.652906    7819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:35:49.655586    7819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:35:49.655592    7819 kubeadm.go:156] found existing configuration files:
	
	I0520 03:35:49.655616    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0520 03:35:49.658340    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:35:49.658365    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:35:49.661447    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0520 03:35:49.664049    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:35:49.664071    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:35:49.666571    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0520 03:35:49.669545    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:35:49.669565    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:35:49.672089    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0520 03:35:49.674563    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:35:49.674588    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
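
Each of the four kubeconfig files above is grepped for the expected control-plane endpoint and removed when the check fails; grep exits 1 when the pattern is absent and 2 when the file itself is missing (the status 2 seen here), and either case falls through to `rm -f`. A sketch of that sweep:

    // Sketch of the stale-config sweep above: grep each file for the expected
    // endpoint; on any non-zero exit (pattern absent or file missing), remove it.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51302"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s may be stale, removing: %v\n", f, err)
    			exec.Command("rm", "-f", f).Run() // log uses sudo rm -f
    		}
    	}
    }
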
	I0520 03:35:49.677584    7819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:35:49.680482    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:49.701467    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.281461    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.411335    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 03:35:50.432897    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
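
The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, each with PATH pointed at the cached v1.24.1 binaries. A sketch of that sequence:

    // Sketch of replaying the kubeadm init phases from the log, in order,
    // with PATH extended to the cached binaries directory.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, args := range phases {
    		cmd := exec.Command("kubeadm", append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
    		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
    			os.Exit(1)
    		}
    	}
    }
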
	I0520 03:35:50.453414    7819 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:35:50.453499    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:35:50.955567    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:35:51.455427    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:35:51.459563    7819 api_server.go:72] duration metric: took 1.006167792s to wait for apiserver process to appear ...
	I0520 03:35:51.459572    7819 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:35:51.459582    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:35:56.460346    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:35:56.460395    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:01.461455    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:01.461484    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:06.461616    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:06.461704    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:11.462133    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:11.462181    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:16.462939    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:16.462965    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:21.463547    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:21.463611    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:26.464607    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:26.464666    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:31.465980    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:31.466021    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:36.467499    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:36.467562    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:41.467893    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:41.467935    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:46.470129    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:46.470209    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:51.472648    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
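
Every healthz probe in the loop above times out after roughly five seconds against https://10.0.2.15:8443/healthz, which is what ultimately fails this test. A sketch of the polling pattern, assuming the probe skips verification of the apiserver's cluster-internal certificate:

    // Sketch of the healthz polling loop above: GET /healthz with a 5s client
    // timeout, retrying until the endpoint answers "ok" or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver serves a cluster-internal cert; the probe skips verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		fmt.Println("healthz not ready:", err)
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for apiserver healthz")
    }
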
	I0520 03:36:51.472836    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:36:51.486716    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:36:51.486801    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:36:51.497884    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:36:51.497967    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:36:51.508229    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:36:51.508302    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:51.519625    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:36:51.519709    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:51.530265    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:36:51.530331    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:51.540392    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:36:51.540456    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:51.550397    7819 logs.go:276] 0 containers: []
	W0520 03:36:51.550407    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:51.550465    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:51.560771    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:36:51.560788    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:36:51.560794    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:36:51.581009    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:36:51.581024    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:36:51.592701    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:36:51.592711    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:36:51.604036    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:51.604057    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:51.709058    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:36:51.709070    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:36:51.731223    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:36:51.731236    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:36:51.747295    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:36:51.747308    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:36:51.759377    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:36:51.759391    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:36:51.773682    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:51.773696    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:51.799445    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:51.799453    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:51.837704    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:36:51.837714    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:36:51.851418    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:36:51.851427    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:36:51.877777    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:36:51.877787    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:36:51.889059    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:36:51.889068    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:51.901337    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:51.901348    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:51.905238    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:36:51.905246    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:36:51.921509    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:36:51.921519    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
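
Each diagnostics pass above lists containers per component with a docker name filter, then tails the last 400 lines of each hit. A sketch of one such gather step (the filter name is taken from the log):

    // Sketch of one log-gathering step from the cycles above: find container
    // IDs by name filter, then tail each container's last 400 log lines.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("=== %s ===\n%s\n", id, logs)
    	}
    }
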
	I0520 03:36:54.441282    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:36:59.443106    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:36:59.443348    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:36:59.462133    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:36:59.462224    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:36:59.480143    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:36:59.480211    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:36:59.494248    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:36:59.494323    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:36:59.504977    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:36:59.505043    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:36:59.514914    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:36:59.514977    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:36:59.525989    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:36:59.526064    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:36:59.536674    7819 logs.go:276] 0 containers: []
	W0520 03:36:59.536685    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:36:59.536736    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:36:59.548556    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:36:59.548577    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:36:59.548583    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:36:59.562805    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:36:59.562817    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:36:59.580469    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:36:59.580481    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:36:59.592443    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:36:59.592455    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:36:59.630359    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:36:59.630369    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:36:59.641305    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:36:59.641315    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:36:59.656256    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:36:59.656266    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:36:59.660708    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:36:59.660714    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:36:59.672815    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:36:59.672825    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:36:59.711851    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:36:59.711862    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:36:59.737076    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:36:59.737087    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:36:59.752047    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:36:59.752059    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:36:59.770023    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:36:59.770033    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:36:59.781448    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:36:59.781459    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:36:59.805837    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:36:59.805846    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:36:59.819745    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:36:59.819756    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:36:59.833718    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:36:59.833727    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:02.349780    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:07.352134    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:07.352361    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:07.376061    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:07.376161    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:07.392463    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:07.392542    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:07.404798    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:07.404865    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:07.419451    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:07.419520    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:07.430943    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:07.431016    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:07.449093    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:07.449162    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:07.459177    7819 logs.go:276] 0 containers: []
	W0520 03:37:07.459187    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:07.459244    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:07.472545    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:07.472565    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:07.472570    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:07.497039    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:07.497049    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:07.533198    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:07.533207    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:07.570036    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:07.570046    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:07.585827    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:07.585837    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:07.599258    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:07.599269    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:07.613774    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:07.613783    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:07.627544    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:07.627554    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:07.638978    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:07.638992    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:07.652716    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:07.652724    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:07.677737    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:07.677748    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:07.702374    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:07.702385    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:07.716623    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:07.716632    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:07.721226    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:07.721235    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:07.736874    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:07.736889    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:07.748517    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:07.748527    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:07.759318    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:07.759329    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:10.273848    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:15.276145    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:15.276498    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:15.308652    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:15.308784    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:15.330230    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:15.330317    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:15.343267    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:15.343334    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:15.355409    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:15.355484    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:15.366939    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:15.367009    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:15.377739    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:15.377807    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:15.388799    7819 logs.go:276] 0 containers: []
	W0520 03:37:15.388812    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:15.388868    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:15.399284    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:15.399301    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:15.399306    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:15.411269    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:15.411279    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:15.422492    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:15.422502    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:15.436245    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:15.436254    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:15.449348    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:15.449359    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:15.466723    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:15.466734    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:15.484258    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:15.484272    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:15.510224    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:15.510235    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:15.544832    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:15.544843    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:15.559265    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:15.559276    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:15.573716    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:15.573727    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:15.611676    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:15.611683    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:15.625408    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:15.625419    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:15.638119    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:15.638128    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:15.649576    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:15.649588    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:15.653832    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:15.653840    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:15.684669    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:15.684679    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:18.201536    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:23.203870    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:23.204284    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:23.240794    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:23.240925    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:23.260440    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:23.260540    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:23.276462    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:23.276540    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:23.288712    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:23.288779    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:23.299075    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:23.299143    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:23.310894    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:23.310960    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:23.326132    7819 logs.go:276] 0 containers: []
	W0520 03:37:23.326143    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:23.326201    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:23.336974    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:23.336991    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:23.336996    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:23.375948    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:23.375961    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:23.391006    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:23.391016    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:23.403083    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:23.403094    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:23.427761    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:23.427772    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:23.431782    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:23.431791    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:23.443007    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:23.443017    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:23.458388    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:23.458399    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:23.477296    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:23.477309    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:23.488878    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:23.488890    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:23.501373    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:23.501385    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:23.538942    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:23.538952    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:23.564866    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:23.564880    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:23.578657    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:23.578670    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:23.593233    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:23.593244    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:23.607765    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:23.607776    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:23.619365    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:23.619374    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:26.133992    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:31.136310    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:31.136558    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:31.164609    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:31.164753    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:31.185244    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:31.185351    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:31.198353    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:31.198440    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:31.210117    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:31.210198    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:31.220723    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:31.220802    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:31.231214    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:31.231285    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:31.241548    7819 logs.go:276] 0 containers: []
	W0520 03:37:31.241568    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:31.241638    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:31.252319    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:31.252338    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:31.252343    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:31.267034    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:31.267044    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:31.282224    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:31.282235    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:31.297511    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:31.297521    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:31.309370    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:31.309380    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:31.321713    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:31.321723    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:31.346239    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:31.346251    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:31.358543    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:31.358553    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:31.374848    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:31.374858    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:31.398409    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:31.398415    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:31.431430    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:31.431445    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:31.445056    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:31.445067    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:31.456588    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:31.456600    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:31.469388    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:31.469401    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:31.483058    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:31.483067    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:31.486994    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:31.487001    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:31.504492    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:31.504501    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:34.045299    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:39.047528    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:39.047692    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:39.062200    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:39.062278    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:39.074009    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:39.074080    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:39.084469    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:39.084541    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:39.095315    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:39.095388    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:39.110561    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:39.110629    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:39.121069    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:39.121139    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:39.131670    7819 logs.go:276] 0 containers: []
	W0520 03:37:39.131691    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:39.131786    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:39.142674    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:39.142690    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:39.142696    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:39.146625    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:39.146630    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:39.158169    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:39.158178    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:39.176154    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:39.176163    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:39.189985    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:39.189998    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:39.203909    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:39.203918    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:39.219028    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:39.219037    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:39.232948    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:39.232957    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:39.247858    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:39.247867    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:39.284425    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:39.284432    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:39.320656    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:39.320664    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:39.332373    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:39.332385    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:39.344180    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:39.344190    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:39.369157    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:39.369167    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:39.394307    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:39.394318    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:39.405756    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:39.405764    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:39.430950    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:39.430960    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:41.958387    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:46.959524    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:46.959767    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:46.977025    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:46.977104    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:46.990995    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:46.991069    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:47.004804    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:47.004876    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:47.015019    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:47.015088    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:47.026239    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:47.026310    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:47.037103    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:47.037167    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:47.047219    7819 logs.go:276] 0 containers: []
	W0520 03:37:47.047233    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:47.047288    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:47.057657    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:47.057676    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:47.057681    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:47.068816    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:47.068826    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:47.080187    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:47.080197    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:47.097777    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:47.097791    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:47.113045    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:47.113056    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:47.127324    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:47.127333    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:47.141463    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:47.141473    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:47.164376    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:47.164384    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:47.168552    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:47.168558    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:47.202838    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:47.202853    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:47.216905    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:47.216915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:47.241838    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:47.241844    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:47.256596    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:47.256609    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:47.269674    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:47.269690    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:47.310016    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:47.310027    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:47.326628    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:47.326646    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:47.339340    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:47.339352    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:49.852870    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:37:54.854555    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:37:54.854664    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:37:54.866313    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:37:54.866389    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:37:54.879739    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:37:54.879817    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:37:54.890099    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:37:54.890161    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:37:54.900291    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:37:54.900358    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:37:54.910695    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:37:54.910763    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:37:54.921375    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:37:54.921435    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:37:54.931362    7819 logs.go:276] 0 containers: []
	W0520 03:37:54.931372    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:37:54.931430    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:37:54.941716    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:37:54.941734    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:37:54.941740    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:37:54.966968    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:37:54.966979    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:37:54.981385    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:37:54.981394    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:37:54.993049    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:37:54.993059    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:37:55.030521    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:37:55.030536    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:37:55.041980    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:37:55.041992    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:37:55.055908    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:37:55.055918    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:37:55.079855    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:37:55.079862    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:37:55.116904    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:37:55.116916    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:37:55.132179    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:37:55.132189    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:37:55.145271    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:37:55.145282    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:37:55.161871    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:37:55.161883    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:37:55.174745    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:37:55.174757    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:37:55.193725    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:37:55.193736    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:37:55.206245    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:37:55.206258    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:37:55.211233    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:37:55.211241    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:37:55.223672    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:37:55.223681    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:37:57.740100    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:02.742564    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:02.742838    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:02.762771    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:02.762871    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:02.777566    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:02.777651    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:02.789114    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:02.789188    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:02.799806    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:02.799883    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:02.810723    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:02.810798    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:02.821523    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:02.821601    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:02.832150    7819 logs.go:276] 0 containers: []
	W0520 03:38:02.832164    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:02.832225    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:02.842550    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:02.842567    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:02.842573    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:02.853706    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:02.853716    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:02.871245    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:02.871258    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:02.895258    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:02.895265    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:02.899443    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:02.899452    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:02.911139    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:02.911149    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:02.950523    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:02.950543    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:02.986834    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:02.986841    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:02.998774    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:02.998783    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:03.011174    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:03.011183    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:03.025661    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:03.025669    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:03.063327    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:03.063339    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:03.078409    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:03.078419    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:03.093554    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:03.093564    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:03.105657    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:03.105665    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:03.121926    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:03.121939    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:03.136817    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:03.136830    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:05.651775    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:10.653998    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:10.654202    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:10.673093    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:10.673177    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:10.686252    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:10.686331    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:10.697935    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:10.698010    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:10.708509    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:10.708576    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:10.720006    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:10.720070    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:10.730332    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:10.730392    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:10.741006    7819 logs.go:276] 0 containers: []
	W0520 03:38:10.741018    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:10.741080    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:10.751075    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:10.751089    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:10.751094    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:10.791300    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:10.791310    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:10.816644    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:10.816657    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:10.832384    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:10.832396    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:10.852172    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:10.852184    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:10.864797    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:10.864810    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:10.877268    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:10.877282    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:10.889628    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:10.889639    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:10.894639    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:10.894647    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:10.909514    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:10.909524    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:10.921544    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:10.921554    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:10.934625    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:10.934637    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:10.949871    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:10.949887    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:10.987610    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:10.987623    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:11.002199    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:11.004387    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:11.021059    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:11.021070    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:11.047275    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:11.047286    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:13.562287    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:18.563301    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:18.563433    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:18.576331    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:18.576414    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:18.587541    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:18.587609    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:18.604265    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:18.604331    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:18.615075    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:18.615146    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:18.626730    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:18.626803    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:18.638137    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:18.638206    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:18.649562    7819 logs.go:276] 0 containers: []
	W0520 03:38:18.649576    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:18.649637    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:18.660832    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:18.660849    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:18.660855    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:18.676381    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:18.676397    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:18.692351    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:18.692359    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:18.704439    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:18.704449    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:18.718894    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:18.718904    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:18.733433    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:18.733444    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:18.746419    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:18.746431    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:18.758389    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:18.758400    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:18.798236    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:18.798249    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:18.824300    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:18.824310    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:18.841602    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:18.841611    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:18.861123    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:18.861139    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:18.887176    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:18.887185    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:18.899666    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:18.899674    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:18.903938    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:18.903947    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:18.939979    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:18.939990    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:18.951296    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:18.951307    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:21.471715    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:26.473897    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:26.473966    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:26.485808    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:26.485882    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:26.498292    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:26.498378    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:26.509606    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:26.509679    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:26.521626    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:26.521701    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:26.533262    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:26.533329    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:26.544947    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:26.545026    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:26.557319    7819 logs.go:276] 0 containers: []
	W0520 03:38:26.557330    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:26.557389    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:26.568609    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:26.568625    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:26.568631    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:26.608036    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:26.608049    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:26.647013    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:26.647027    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:26.673023    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:26.673033    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:26.687826    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:26.687839    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:26.706024    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:26.706035    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:26.718778    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:26.718790    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:26.744437    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:26.744453    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:26.749098    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:26.749108    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:26.764201    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:26.764212    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:26.776286    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:26.776297    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:26.787904    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:26.787915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:26.802504    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:26.802513    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:26.817083    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:26.817093    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:26.827748    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:26.827761    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:26.842975    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:26.842985    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:26.854324    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:26.854334    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:29.367912    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:34.370207    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:34.370383    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:34.381759    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:34.381823    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:34.393253    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:34.393316    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:34.404638    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:34.404701    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:34.416365    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:34.416433    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:34.427287    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:34.427352    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:34.438456    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:34.438528    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:34.449869    7819 logs.go:276] 0 containers: []
	W0520 03:38:34.449886    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:34.449943    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:34.461981    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:34.461999    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:34.462004    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:34.478408    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:34.478421    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:34.491424    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:34.491434    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:34.503827    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:34.503839    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:34.516397    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:34.516407    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:34.556882    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:34.556892    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:34.561396    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:34.561402    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:34.600950    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:34.600963    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:34.615395    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:34.615411    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:34.629561    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:34.629572    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:34.644720    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:34.644732    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:34.658829    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:34.658838    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:34.682720    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:34.682728    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:34.696094    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:34.696106    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:34.707940    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:34.707954    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:34.733032    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:34.733047    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:34.744997    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:34.745007    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:37.268863    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:42.270363    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:42.270445    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:42.282706    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:42.282778    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:42.294581    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:42.294652    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:42.307138    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:42.307210    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:42.318502    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:42.318574    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:42.329855    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:42.329922    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:42.340829    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:42.340897    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:42.351905    7819 logs.go:276] 0 containers: []
	W0520 03:38:42.351916    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:42.351977    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:42.363506    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:42.363524    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:42.363530    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:42.381760    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:42.381770    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:42.394778    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:42.394789    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:42.407817    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:42.407832    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:42.412110    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:42.412118    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:42.436771    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:42.436785    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:42.451568    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:42.451578    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:42.468333    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:42.468346    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:42.508478    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:42.508505    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:42.520435    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:42.520445    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:42.535867    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:42.535876    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:42.547081    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:42.547092    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:42.562036    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:42.562045    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:42.574995    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:42.575007    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:42.609662    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:42.609672    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:42.623357    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:42.623369    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:42.637391    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:42.637401    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:45.162827    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:50.165092    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:50.165235    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:50.176647    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:50.176720    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:50.190033    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:50.190171    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:50.201965    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:50.202035    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:50.218426    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:50.218500    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:50.229689    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:50.229758    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:50.244683    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:50.244758    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:50.258109    7819 logs.go:276] 0 containers: []
	W0520 03:38:50.258127    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:50.258197    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:50.270182    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:50.270211    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:50.270234    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:50.285197    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:50.285207    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:50.298387    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:50.298399    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:50.323851    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:50.323862    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:50.339010    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:50.339023    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:50.366480    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:50.366494    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:50.387510    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:50.387528    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:50.404093    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:50.404109    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:50.442877    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:50.442890    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:50.447140    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:50.447146    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:50.485392    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:50.485403    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:50.499513    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:50.499524    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:50.510584    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:50.510596    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:50.522836    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:50.522846    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:50.537590    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:50.537600    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:50.549476    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:50.549486    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:50.568699    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:50.568708    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:38:53.094791    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:38:58.097104    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:38:58.097191    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:38:58.109260    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:38:58.109333    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:38:58.121003    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:38:58.121083    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:38:58.132262    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:38:58.132333    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:38:58.143525    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:38:58.143601    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:38:58.155554    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:38:58.155623    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:38:58.167197    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:38:58.167267    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:38:58.177927    7819 logs.go:276] 0 containers: []
	W0520 03:38:58.177940    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:38:58.178000    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:38:58.188848    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:38:58.188868    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:38:58.188875    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:38:58.203248    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:38:58.203256    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:38:58.220058    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:38:58.220070    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:38:58.232622    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:38:58.232634    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:38:58.248094    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:38:58.248103    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:38:58.264593    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:38:58.264605    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:38:58.277129    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:38:58.277139    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:38:58.289127    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:38:58.289138    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:38:58.301806    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:38:58.301818    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:38:58.320855    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:38:58.320871    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:38:58.359082    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:38:58.359100    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:38:58.364992    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:38:58.365007    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:38:58.403045    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:38:58.403057    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:38:58.416787    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:38:58.416800    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:38:58.441461    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:38:58.441472    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:38:58.453607    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:38:58.453618    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:38:58.468914    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:38:58.468924    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:00.994271    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:05.996657    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:05.996742    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:06.008160    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:06.008231    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:06.019479    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:06.019550    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:06.030312    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:06.030381    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:06.041366    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:06.041435    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:06.052132    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:06.052200    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:06.063348    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:06.063417    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:06.075152    7819 logs.go:276] 0 containers: []
	W0520 03:39:06.075163    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:06.075222    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:06.088636    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:06.088656    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:06.088662    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:06.114849    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:06.114864    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:06.127403    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:06.127414    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:06.140364    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:06.140375    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:06.152273    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:06.152281    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:06.171127    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:06.171135    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:06.185861    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:06.185873    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:06.202807    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:06.202821    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:06.218407    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:06.218421    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:06.230319    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:06.230331    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:06.268165    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:06.268182    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:06.304613    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:06.304626    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:06.319027    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:06.319041    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:06.330827    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:06.330840    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:06.334973    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:06.334982    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:06.349281    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:06.349290    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:06.360688    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:06.360699    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:08.886507    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:13.889159    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:13.889322    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:13.900583    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:13.900654    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:13.914618    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:13.914684    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:13.925362    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:13.925438    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:13.936731    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:13.936807    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:13.953116    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:13.953187    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:13.964900    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:13.964966    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:13.976013    7819 logs.go:276] 0 containers: []
	W0520 03:39:13.976025    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:13.976087    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:13.987140    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:13.987158    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:13.987163    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:13.999415    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:13.999428    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
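
The `container status` command above is a shell fallback chain: the backtick substitution `which crictl || echo crictl` yields the crictl path when it exists, or the bare word `crictl` when it does not — in which case the first command fails and `|| sudo docker ps -a` takes over. A small Go wrapper running the same one-liner, shown only to unpack the idiom (hypothetical, not minikube's code):

    // Run the same fallback one-liner seen in the log: prefer crictl when
    // available, otherwise fall back to docker ps -a.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
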
	I0520 03:39:14.012493    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:14.012505    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:14.017749    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:14.017761    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:14.056596    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:14.056610    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:14.070938    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:14.070948    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:14.085546    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:14.085555    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:14.110973    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:14.110984    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:14.129218    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:14.129227    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:14.141378    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:14.141392    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:14.156556    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:14.156566    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:14.171946    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:14.171957    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:14.183473    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:14.183483    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:14.197777    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:14.197787    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:14.221264    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:14.221272    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:14.259165    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:14.259172    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:14.274660    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:14.274670    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:16.794981    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:21.797106    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:21.797198    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:21.808762    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:21.808837    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:21.820873    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:21.820950    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:21.832187    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:21.832257    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:21.849762    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:21.849839    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:21.861212    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:21.861287    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:21.872959    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:21.873032    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:21.890430    7819 logs.go:276] 0 containers: []
	W0520 03:39:21.890441    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:21.890500    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:21.901576    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:21.901597    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:21.901603    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:21.936204    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:21.936215    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:21.961085    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:21.961100    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:21.984652    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:21.984660    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:21.999330    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:21.999340    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:22.010944    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:22.010956    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:22.024877    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:22.024886    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:22.036843    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:22.036853    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:22.055278    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:22.055292    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:22.067540    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:22.067552    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:22.085120    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:22.085131    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:22.097161    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:22.097171    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:22.135723    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:22.135732    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:22.139784    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:22.139792    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:22.153761    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:22.153776    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:22.165605    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:22.165616    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:22.182337    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:22.182348    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:24.696456    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:29.698559    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:29.698661    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:29.710618    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:29.710690    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:29.722174    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:29.722246    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:29.733379    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:29.733474    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:29.744818    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:29.744893    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:29.755743    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:29.755811    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:29.766909    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:29.766981    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:29.778469    7819 logs.go:276] 0 containers: []
	W0520 03:39:29.778484    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:29.778547    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:29.790606    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:29.790623    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:29.790628    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:29.805392    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:29.805403    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:29.816963    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:29.816977    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:29.854517    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:29.854527    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:29.866426    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:29.866440    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:29.883643    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:29.883652    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:29.908071    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:29.908081    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:29.919829    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:29.919839    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:29.942901    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:29.942911    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:29.958009    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:29.958019    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:29.962588    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:29.962594    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:29.982536    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:29.982545    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:29.993534    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:29.993544    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:30.004158    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:30.004170    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:30.016496    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:30.016506    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:30.051007    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:30.051023    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:30.064768    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:30.064782    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:32.581259    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:37.583473    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:37.583594    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:37.594975    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:37.595066    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:37.610441    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:37.610512    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:37.621869    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:37.621945    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:37.638494    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:37.638571    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:37.662027    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:37.662102    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:37.680377    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:37.680459    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:37.694338    7819 logs.go:276] 0 containers: []
	W0520 03:39:37.694350    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:37.694414    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:37.706412    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:37.706430    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:37.706436    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:37.718431    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:37.718441    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:37.742048    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:37.742057    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:37.758264    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:37.758273    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:37.769754    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:37.769764    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:37.791656    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:37.791663    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:37.806029    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:37.806039    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:37.820262    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:37.820272    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:37.838384    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:37.838394    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:37.875207    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:37.875216    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:37.889397    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:37.889406    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:37.914122    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:37.914133    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:37.926366    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:37.926375    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:37.941966    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:37.941975    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:37.946786    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:37.946792    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:37.982358    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:37.982372    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:37.996905    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:37.996915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:40.510639    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:45.512795    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:45.512896    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:39:45.524099    7819 logs.go:276] 2 containers: [bfe599938d52 30a7c27597f7]
	I0520 03:39:45.524182    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:39:45.539868    7819 logs.go:276] 2 containers: [9ac3fbc92acb 6474d3cde87b]
	I0520 03:39:45.539952    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:39:45.550909    7819 logs.go:276] 1 containers: [7b517fea756c]
	I0520 03:39:45.550974    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:39:45.561999    7819 logs.go:276] 2 containers: [a8fe7eec658c d8573923fa37]
	I0520 03:39:45.562073    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:39:45.572976    7819 logs.go:276] 1 containers: [1fcacf578749]
	I0520 03:39:45.573041    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:39:45.588582    7819 logs.go:276] 2 containers: [0eda86c525c3 27d8c3bf7a0b]
	I0520 03:39:45.588650    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:39:45.599325    7819 logs.go:276] 0 containers: []
	W0520 03:39:45.599336    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:39:45.599396    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:39:45.615748    7819 logs.go:276] 2 containers: [32811f364fc6 418718198a05]
	I0520 03:39:45.615764    7819 logs.go:123] Gathering logs for kube-apiserver [30a7c27597f7] ...
	I0520 03:39:45.615769    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a7c27597f7"
	I0520 03:39:45.663617    7819 logs.go:123] Gathering logs for kube-scheduler [a8fe7eec658c] ...
	I0520 03:39:45.663627    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8fe7eec658c"
	I0520 03:39:45.680200    7819 logs.go:123] Gathering logs for kube-proxy [1fcacf578749] ...
	I0520 03:39:45.680216    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fcacf578749"
	I0520 03:39:45.691504    7819 logs.go:123] Gathering logs for kube-controller-manager [0eda86c525c3] ...
	I0520 03:39:45.691514    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eda86c525c3"
	I0520 03:39:45.708289    7819 logs.go:123] Gathering logs for kube-apiserver [bfe599938d52] ...
	I0520 03:39:45.708300    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bfe599938d52"
	I0520 03:39:45.722435    7819 logs.go:123] Gathering logs for kube-controller-manager [27d8c3bf7a0b] ...
	I0520 03:39:45.722443    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27d8c3bf7a0b"
	I0520 03:39:45.736744    7819 logs.go:123] Gathering logs for storage-provisioner [418718198a05] ...
	I0520 03:39:45.736754    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 418718198a05"
	I0520 03:39:45.747627    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:39:45.747637    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:39:45.785586    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:39:45.785593    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:39:45.790112    7819 logs.go:123] Gathering logs for etcd [9ac3fbc92acb] ...
	I0520 03:39:45.790118    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac3fbc92acb"
	I0520 03:39:45.814739    7819 logs.go:123] Gathering logs for etcd [6474d3cde87b] ...
	I0520 03:39:45.814755    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6474d3cde87b"
	I0520 03:39:45.837068    7819 logs.go:123] Gathering logs for coredns [7b517fea756c] ...
	I0520 03:39:45.837077    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b517fea756c"
	I0520 03:39:45.850369    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:39:45.850379    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:39:45.862918    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:39:45.862928    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:39:45.898822    7819 logs.go:123] Gathering logs for kube-scheduler [d8573923fa37] ...
	I0520 03:39:45.898830    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8573923fa37"
	I0520 03:39:45.913938    7819 logs.go:123] Gathering logs for storage-provisioner [32811f364fc6] ...
	I0520 03:39:45.913947    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32811f364fc6"
	I0520 03:39:45.925712    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:39:45.925725    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:39:48.450891    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:39:53.451340    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:39:53.451374    7819 kubeadm.go:591] duration metric: took 4m3.83704325s to restartPrimaryControlPlane
	W0520 03:39:53.451403    7819 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 03:39:53.451417    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0520 03:39:54.453466    7819 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002055708s)
	I0520 03:39:54.453530    7819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 03:39:54.458771    7819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 03:39:54.461757    7819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 03:39:54.464513    7819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 03:39:54.464521    7819 kubeadm.go:156] found existing configuration files:
	
	I0520 03:39:54.464544    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf
	I0520 03:39:54.466978    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 03:39:54.467002    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 03:39:54.469697    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf
	I0520 03:39:54.472832    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 03:39:54.472854    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 03:39:54.475516    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf
	I0520 03:39:54.478042    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 03:39:54.478063    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 03:39:54.481129    7819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf
	I0520 03:39:54.483671    7819 kubeadm.go:162] "https://control-plane.minikube.internal:51302" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51302 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 03:39:54.483692    7819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
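
The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the upcoming `kubeadm init` can regenerate it. A compact sketch of that loop, with the endpoint and file list taken from the log and the exec-over-SSH plumbing omitted:

    // Remove any kubeconfig that does not reference the expected control-plane
    // endpoint, mirroring the grep/rm pairs in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51302"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is
            // missing -- the "Process exited with status 2" case logged above.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Println("removing stale config:", f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
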
	I0520 03:39:54.486565    7819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 03:39:54.504973    7819 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0520 03:39:54.505047    7819 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 03:39:54.555266    7819 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 03:39:54.555356    7819 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 03:39:54.555419    7819 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 03:39:54.604032    7819 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 03:39:54.608150    7819 out.go:204]   - Generating certificates and keys ...
	I0520 03:39:54.608185    7819 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 03:39:54.608215    7819 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 03:39:54.608249    7819 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 03:39:54.608289    7819 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 03:39:54.608327    7819 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 03:39:54.608353    7819 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 03:39:54.608386    7819 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 03:39:54.608418    7819 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 03:39:54.608461    7819 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 03:39:54.608498    7819 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 03:39:54.608520    7819 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 03:39:54.608551    7819 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 03:39:54.809249    7819 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 03:39:54.877233    7819 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 03:39:55.039889    7819 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 03:39:55.238747    7819 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 03:39:55.267985    7819 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 03:39:55.268319    7819 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 03:39:55.268340    7819 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 03:39:55.350347    7819 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 03:39:55.353371    7819 out.go:204]   - Booting up control plane ...
	I0520 03:39:55.353501    7819 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 03:39:55.353582    7819 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 03:39:55.353715    7819 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 03:39:55.359486    7819 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 03:39:55.360379    7819 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 03:39:59.862464    7819 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501729 seconds
	I0520 03:39:59.862544    7819 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 03:39:59.867804    7819 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 03:40:00.377727    7819 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 03:40:00.377849    7819 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-555000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 03:40:00.883816    7819 kubeadm.go:309] [bootstrap-token] Using token: yby7v6.tsn74ll1eer8ce0x
	I0520 03:40:00.889853    7819 out.go:204]   - Configuring RBAC rules ...
	I0520 03:40:00.889921    7819 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 03:40:00.889971    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 03:40:00.897247    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 03:40:00.898140    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 03:40:00.899017    7819 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 03:40:00.899825    7819 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 03:40:00.903118    7819 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 03:40:01.083587    7819 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 03:40:01.290485    7819 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 03:40:01.290505    7819 kubeadm.go:309] 
	I0520 03:40:01.290595    7819 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 03:40:01.290608    7819 kubeadm.go:309] 
	I0520 03:40:01.290722    7819 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 03:40:01.290735    7819 kubeadm.go:309] 
	I0520 03:40:01.290773    7819 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 03:40:01.290849    7819 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 03:40:01.290913    7819 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 03:40:01.290924    7819 kubeadm.go:309] 
	I0520 03:40:01.290997    7819 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 03:40:01.291004    7819 kubeadm.go:309] 
	I0520 03:40:01.291061    7819 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 03:40:01.291070    7819 kubeadm.go:309] 
	I0520 03:40:01.291139    7819 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 03:40:01.291311    7819 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 03:40:01.291390    7819 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 03:40:01.291405    7819 kubeadm.go:309] 
	I0520 03:40:01.291454    7819 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 03:40:01.291563    7819 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 03:40:01.291569    7819 kubeadm.go:309] 
	I0520 03:40:01.291612    7819 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yby7v6.tsn74ll1eer8ce0x \
	I0520 03:40:01.291662    7819 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 \
	I0520 03:40:01.291671    7819 kubeadm.go:309] 	--control-plane 
	I0520 03:40:01.291674    7819 kubeadm.go:309] 
	I0520 03:40:01.291709    7819 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 03:40:01.291712    7819 kubeadm.go:309] 
	I0520 03:40:01.291749    7819 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yby7v6.tsn74ll1eer8ce0x \
	I0520 03:40:01.291804    7819 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0617754ec982b7bdd78f4ed0aba70166512fc8246726a994c7e66f37d0b234c1 
	I0520 03:40:01.291861    7819 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 03:40:01.291873    7819 cni.go:84] Creating CNI manager for ""
	I0520 03:40:01.291881    7819 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:40:01.298762    7819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 03:40:01.301779    7819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 03:40:01.305305    7819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
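
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration; its exact contents are not reproduced in this log. The snippet below writes a representative bridge + host-local conflist of the same general shape to the same path — the plugin list, subnet, and version are illustrative assumptions, not minikube's actual values:

    // Write an illustrative bridge CNI config to the path used in the log.
    // Field values here are assumptions for demonstration.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }
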
	I0520 03:40:01.310856    7819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 03:40:01.310903    7819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 03:40:01.310991    7819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-555000 minikube.k8s.io/updated_at=2024_05_20T03_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=stopped-upgrade-555000 minikube.k8s.io/primary=true
	I0520 03:40:01.318852    7819 ops.go:34] apiserver oom_adj: -16
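
The `-16` read back from /proc/$(pgrep kube-apiserver)/oom_adj means the kernel's OOM killer strongly deprioritizes the apiserver: the legacy oom_adj scale runs from -17 (never kill) to +15 (kill first). A tiny sketch of the same read, with the pgrep lookup simplified to a fixed pid argument:

    // readOOMAdj reads the legacy oom_adj score for a pid; -16 (as logged
    // above) makes the process an unlikely OOM-kill target.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func readOOMAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        adj, err := readOOMAdj(1) // pid 1 used purely for demonstration
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("oom_adj:", adj)
    }
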
	I0520 03:40:01.358893    7819 kubeadm.go:1107] duration metric: took 48.02525ms to wait for elevateKubeSystemPrivileges
	W0520 03:40:01.358944    7819 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 03:40:01.358949    7819 kubeadm.go:393] duration metric: took 4m11.758568667s to StartCluster
	I0520 03:40:01.358959    7819 settings.go:142] acquiring lock: {Name:mkc3af27fbea4a81f456d1d023b17ad3b4bc78ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:40:01.359047    7819 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:40:01.359443    7819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/kubeconfig: {Name:mk2c3e0adb489a0347b499d6142b492dee1b48dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:40:01.359641    7819 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:40:01.363736    7819 out.go:177] * Verifying Kubernetes components...
	I0520 03:40:01.359651    7819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 03:40:01.359716    7819 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:40:01.373797    7819 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-555000"
	I0520 03:40:01.373814    7819 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-555000"
	W0520 03:40:01.373819    7819 addons.go:243] addon storage-provisioner should already be in state true
	I0520 03:40:01.373830    7819 host.go:66] Checking if "stopped-upgrade-555000" exists ...
	I0520 03:40:01.373852    7819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 03:40:01.373866    7819 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-555000"
	I0520 03:40:01.373875    7819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-555000"
	I0520 03:40:01.375066    7819 kapi.go:59] client config for stopped-upgrade-555000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/stopped-upgrade-555000/client.key", CAFile:"/Users/jenkins/minikube-integration/18925-5286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105ea0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 03:40:01.375187    7819 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-555000"
	W0520 03:40:01.375191    7819 addons.go:243] addon default-storageclass should already be in state true
	I0520 03:40:01.375199    7819 host.go:66] Checking if "stopped-upgrade-555000" exists ...
	I0520 03:40:01.379768    7819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 03:40:01.383780    7819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:40:01.383786    7819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 03:40:01.383794    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:40:01.384435    7819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 03:40:01.384439    7819 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 03:40:01.384442    7819 sshutil.go:53] new ssh client: &{IP:localhost Port:51268 SSHKeyPath:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/stopped-upgrade-555000/id_rsa Username:docker}
	I0520 03:40:01.463302    7819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 03:40:01.468038    7819 api_server.go:52] waiting for apiserver process to appear ...
	I0520 03:40:01.468078    7819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 03:40:01.471825    7819 api_server.go:72] duration metric: took 112.174792ms to wait for apiserver process to appear ...
	I0520 03:40:01.471832    7819 api_server.go:88] waiting for apiserver healthz status ...
	I0520 03:40:01.471839    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:01.531605    7819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 03:40:01.532567    7819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 03:40:06.473942    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:06.473992    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:11.474225    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:11.474264    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:16.474530    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:16.474575    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:21.475411    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:21.475440    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:26.476057    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:26.476103    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:31.477049    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:31.477090    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0520 03:40:31.876066    7819 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0520 03:40:31.880112    7819 out.go:177] * Enabled addons: storage-provisioner
	I0520 03:40:31.887987    7819 addons.go:505] duration metric: took 30.528901208s for enable addons: enabled=[storage-provisioner]
	I0520 03:40:36.478348    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:36.478400    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:41.479991    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:41.480016    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:46.481915    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:46.481974    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:51.484280    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:51.484304    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:40:56.486422    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:40:56.486469    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:01.488686    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:01.488834    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:01.510113    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:01.510186    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:01.520877    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:01.520950    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:01.531513    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:01.531576    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:01.542276    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:01.542343    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:01.552650    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:01.552717    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:01.563068    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:01.563144    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:01.573419    7819 logs.go:276] 0 containers: []
	W0520 03:41:01.573429    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:01.573478    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:01.584912    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:01.584928    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:01.584933    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:01.598751    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:01.598763    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:01.610138    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:01.610148    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:01.621513    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:01.621523    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:01.637339    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:01.637352    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:01.648946    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:01.648958    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:01.683392    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:01.683399    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:01.717286    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:01.717296    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:01.731629    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:01.731642    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:01.749038    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:01.749049    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:01.760633    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:01.760645    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:01.764863    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:01.764872    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:01.776255    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:01.776266    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:04.303091    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:09.304904    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:09.305418    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:09.341900    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:09.342041    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:09.364169    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:09.364283    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:09.381714    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:09.381791    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:09.393591    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:09.393652    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:09.404265    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:09.404334    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:09.414749    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:09.414809    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:09.425106    7819 logs.go:276] 0 containers: []
	W0520 03:41:09.425119    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:09.425170    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:09.436629    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:09.436644    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:09.436650    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:09.448313    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:09.448325    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:09.459873    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:09.459886    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:09.482962    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:09.482969    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:09.487475    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:09.487483    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:09.507212    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:09.507224    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:09.523705    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:09.523715    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:09.534993    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:09.535005    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:09.550121    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:09.550131    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:09.583622    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:09.583629    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:09.618199    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:09.618211    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:09.630050    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:09.630062    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:09.647339    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:09.647351    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
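
Each round begins by enumerating the control-plane containers. Containers started for Kubernetes pods under dockershim/cri-dockerd (the log's journalctl unit is cri-docker) carry a k8s_<component> name prefix, which the docker ps name filters above key on. A hypothetical helper along those lines (containerIDs is not a minikube function) might look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the logged "docker ps -a --filter=name=k8s_<name>
// --format={{.ID}}" calls: list container IDs for one component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; an empty slice reproduces the
	// `0 containers: []` case seen for "kindnet" above.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
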
	I0520 03:41:12.160371    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:17.163235    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:17.163662    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:17.217043    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:17.217162    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:17.235653    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:17.235724    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:17.248496    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:17.248570    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:17.259969    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:17.260043    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:17.270400    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:17.270470    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:17.281143    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:17.281216    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:17.291488    7819 logs.go:276] 0 containers: []
	W0520 03:41:17.291498    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:17.291551    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:17.302051    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:17.302066    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:17.302071    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:17.335528    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:17.335536    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:17.339583    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:17.339592    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:17.374982    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:17.374996    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:17.389222    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:17.389232    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:17.401203    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:17.401213    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:17.425236    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:17.425242    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:17.439091    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:17.439102    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:17.450614    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:17.450625    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:17.466427    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:17.466438    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:17.478473    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:17.478482    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:17.495753    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:17.495764    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:17.507043    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:17.507052    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:20.020471    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:25.023627    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:25.024096    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:25.063077    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:25.063203    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:25.084996    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:25.085104    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:25.101220    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:25.101300    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:25.114041    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:25.114114    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:25.124998    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:25.125066    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:25.135309    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:25.135392    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:25.145638    7819 logs.go:276] 0 containers: []
	W0520 03:41:25.145648    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:25.145703    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:25.155985    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:25.155999    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:25.156004    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:25.167964    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:25.167975    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:25.187033    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:25.187047    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:25.225483    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:25.225497    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:25.242187    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:25.242199    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:25.256853    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:25.256863    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:25.275501    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:25.275510    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:25.287466    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:25.287477    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:25.299376    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:25.299388    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:25.316524    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:25.316534    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:25.328414    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:25.328425    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:25.362589    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:25.362595    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:25.366933    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:25.366941    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:27.892786    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:32.895162    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:32.895507    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:32.931749    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:32.931888    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:32.949982    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:32.950091    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:32.963962    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:32.964055    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:32.980333    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:32.980413    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:32.990933    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:32.991009    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:33.001507    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:33.001586    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:33.011874    7819 logs.go:276] 0 containers: []
	W0520 03:41:33.011883    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:33.011941    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:33.022044    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:33.022060    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:33.022065    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:33.033905    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:33.033916    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:33.047884    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:33.047893    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:33.052237    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:33.052242    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:33.086651    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:33.086662    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:33.100686    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:33.100695    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:33.112179    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:33.112188    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:33.123570    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:33.123583    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:33.142041    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:33.142051    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:33.153800    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:33.153812    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:33.187590    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:33.187599    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:33.199151    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:33.199165    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:33.222150    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:33.222159    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:35.744382    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:40.748116    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:40.748534    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:40.788880    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:40.789023    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:40.810455    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:40.810570    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:40.826136    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:40.826209    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:40.838108    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:40.838181    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:40.848640    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:40.848700    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:40.863565    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:40.863636    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:40.873676    7819 logs.go:276] 0 containers: []
	W0520 03:41:40.873687    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:40.873745    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:40.884251    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:40.884266    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:40.884272    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:40.888665    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:40.888675    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:40.900468    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:40.900480    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:40.912867    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:40.912878    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:40.924389    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:40.924401    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:40.949181    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:40.949197    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:40.984708    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:40.984716    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:41.018800    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:41.018814    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:41.033213    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:41.033226    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:41.047232    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:41.047246    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:41.062548    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:41.062561    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:41.074707    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:41.074719    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:41.091586    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:41.091595    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:43.606748    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:48.611048    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:48.611429    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:48.647884    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:48.648010    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:48.666843    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:48.666931    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:48.682345    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:48.682421    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:48.694194    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:48.694257    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:48.708863    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:48.708936    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:48.719106    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:48.719172    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:48.729399    7819 logs.go:276] 0 containers: []
	W0520 03:41:48.729413    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:48.729477    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:48.739761    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:48.739777    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:48.739782    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:48.754654    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:48.754666    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:48.768696    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:48.768707    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:48.782231    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:48.782244    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:48.793975    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:48.793987    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:48.805489    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:48.805499    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:48.839505    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:48.839514    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:48.873291    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:48.873307    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:48.886121    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:48.886130    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:48.906119    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:48.906134    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:48.917849    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:48.917859    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:48.945330    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:48.945341    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:48.969880    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:48.969885    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
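
The "container status" step runs a bash one-liner that prefers crictl when it is on the PATH and falls back to plain docker ps -a otherwise; the `which crictl || echo crictl` substitution makes the first branch fail cleanly (the bare word "crictl" is not found) when crictl is absent, so the || fallback fires. A sketch that simply shells out the same way the log does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact one-liner from the log: try crictl, fall back to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("fallback also failed:", err)
	}
	fmt.Print(string(out))
}
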
	I0520 03:41:51.475735    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:41:56.478251    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:41:56.478504    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:41:56.501990    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:41:56.502084    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:41:56.515778    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:41:56.515848    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:41:56.527992    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:41:56.528060    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:41:56.538597    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:41:56.538666    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:41:56.549059    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:41:56.549125    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:41:56.559612    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:41:56.559671    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:41:56.569848    7819 logs.go:276] 0 containers: []
	W0520 03:41:56.569859    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:41:56.569914    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:41:56.580113    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:41:56.580127    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:41:56.580133    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:41:56.613394    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:41:56.613400    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:41:56.624514    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:41:56.624526    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:41:56.635661    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:41:56.635675    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:41:56.640291    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:41:56.640297    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:41:56.676483    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:41:56.676500    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:41:56.693616    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:41:56.693629    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:41:56.707255    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:41:56.707267    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:41:56.718705    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:41:56.718715    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:41:56.733944    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:41:56.733956    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:41:56.749124    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:41:56.749135    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:41:56.766645    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:41:56.766654    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:41:56.778192    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:41:56.778202    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:41:59.303594    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:04.306945    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:04.307193    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:04.327167    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:04.327259    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:04.340870    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:04.340938    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:04.352818    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:42:04.352882    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:04.363624    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:04.363691    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:04.374147    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:04.374215    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:04.384424    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:04.384486    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:04.395281    7819 logs.go:276] 0 containers: []
	W0520 03:42:04.395291    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:04.395343    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:04.405430    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:04.405445    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:04.405451    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:04.416932    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:04.416946    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:04.428233    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:04.428248    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:04.440981    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:04.440993    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:04.474634    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:04.474642    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:04.478605    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:04.478611    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:04.494489    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:04.494499    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:04.514287    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:04.514296    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:04.531979    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:04.531991    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:04.556068    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:04.556076    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:04.591016    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:04.591027    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:04.609527    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:04.609541    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:04.621061    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:04.621070    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:07.140774    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:12.143901    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:12.144370    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:12.185794    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:12.185929    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:12.207267    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:12.207372    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:12.222618    7819 logs.go:276] 2 containers: [c1019828fe84 e428a6827bf2]
	I0520 03:42:12.222695    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:12.235169    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:12.235237    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:12.246186    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:12.246251    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:12.258392    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:12.258448    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:12.270237    7819 logs.go:276] 0 containers: []
	W0520 03:42:12.270247    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:12.270297    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:12.280977    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:12.280992    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:12.280996    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:12.296616    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:12.296628    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:12.314886    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:12.314901    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:12.326304    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:12.326315    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:12.337373    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:12.337386    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:12.372080    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:12.372089    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:12.407441    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:12.407452    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:12.421986    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:12.421995    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:12.433936    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:12.433946    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:12.457903    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:12.457912    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:12.462035    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:12.462040    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:12.478737    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:12.478751    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:12.490747    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:12.490756    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:15.005780    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:20.008236    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:20.008299    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:20.019902    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:20.019973    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:20.032069    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:20.032125    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:20.046993    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:42:20.047061    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:20.059045    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:20.059111    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:20.071296    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:20.071346    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:20.082466    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:20.082515    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:20.093210    7819 logs.go:276] 0 containers: []
	W0520 03:42:20.093223    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:20.093274    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:20.104982    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:20.104996    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:20.105001    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:20.117697    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:20.117708    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:20.143025    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:20.143042    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:20.156469    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:20.156480    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:20.168348    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:20.168357    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:20.186687    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:20.186698    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:20.192992    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:20.193003    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:20.207775    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:20.207785    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:20.224216    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:20.224226    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:20.237593    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:20.237604    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:20.250669    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:20.250679    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:20.286950    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:20.286964    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:20.302574    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:42:20.302582    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:42:20.315018    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:42:20.315027    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:42:20.327676    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:20.327687    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
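
Note that from 03:42:20 onward the coredns enumeration returns four containers instead of two ([2ff59bda190d d7c52233221a] join the earlier pair), suggesting the coredns containers restarted while the apiserver stayed unreachable. Per-container logs are gathered with docker logs --tail 400; a sketch of that step follows, using the container IDs from this log, which of course exist only inside that guest VM:

package main

import (
	"fmt"
	"os/exec"
)

// tailLogs mirrors the logged "docker logs --tail 400 <id>" calls.
// CombinedOutput keeps stderr, so logs from crashed containers show too.
func tailLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	// The four coredns container IDs enumerated in the cycle above.
	for _, id := range []string{"2ff59bda190d", "d7c52233221a", "c1019828fe84", "e428a6827bf2"} {
		logs, err := tailLogs(id, 400)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
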
	I0520 03:42:22.870166    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:27.873164    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:27.873600    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:27.923312    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:27.923445    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:27.941299    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:27.941387    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:27.956637    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:42:27.956710    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:27.968112    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:27.968156    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:27.978987    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:27.979042    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:27.990905    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:27.990974    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:28.002211    7819 logs.go:276] 0 containers: []
	W0520 03:42:28.002226    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:28.002297    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:28.014520    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:28.014541    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:28.014547    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:28.027720    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:28.027739    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:28.041504    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:28.041516    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:28.067540    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:28.067556    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:28.083661    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:28.083679    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:28.099282    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:28.099298    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:28.113040    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:28.113052    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:28.149599    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:28.149614    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:28.186405    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:28.186418    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:28.202834    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:28.202844    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:28.220988    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:28.220998    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:28.233555    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:28.233566    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:28.237640    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:42:28.237646    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:42:28.249096    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:42:28.249107    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:42:28.260856    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:28.260869    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:30.775074    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:35.775992    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:35.776465    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:35.815924    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:35.816049    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:35.838431    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:35.838525    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:35.854380    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:42:35.854447    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:35.867182    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:35.867258    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:35.878295    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:35.878353    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:35.889000    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:35.889064    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:35.899195    7819 logs.go:276] 0 containers: []
	W0520 03:42:35.899205    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:35.899266    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:35.910597    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:35.910615    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:35.910621    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:35.914987    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:35.914993    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:35.926257    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:42:35.926271    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:42:35.937814    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:42:35.937825    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:42:35.949785    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:35.949795    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:35.974958    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:35.974967    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:35.986554    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:35.986564    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:35.997935    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:35.997944    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:36.031039    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:36.031046    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:36.045477    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:36.045486    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:36.061797    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:36.061806    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:36.082094    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:36.082104    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:36.103610    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:36.103619    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:36.137852    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:36.137862    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:36.152251    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:36.152260    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:38.665853    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:43.668190    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:43.668273    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:43.679755    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:43.679829    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:43.690680    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:43.690757    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:43.703013    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:42:43.703078    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:43.718514    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:43.718560    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:43.731783    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:43.731843    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:43.742833    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:43.742902    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:43.754368    7819 logs.go:276] 0 containers: []
	W0520 03:42:43.754380    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:43.754425    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:43.765304    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:43.765320    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:43.765325    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:43.803325    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:42:43.803333    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:42:43.822248    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:43.822261    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:43.839702    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:43.839714    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:43.853097    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:43.853106    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:43.857293    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:43.857303    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:43.878114    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:43.878132    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:43.898068    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:43.898079    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:43.933817    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:43.933836    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:43.946452    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:43.946465    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:43.964349    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:43.964362    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:43.989899    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:43.989910    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:44.004775    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:44.004790    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:44.020375    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:42:44.020392    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:42:44.033906    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:44.033915    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:46.547639    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:51.550219    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:51.550474    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:51.572688    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:51.572798    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:51.588047    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:51.588118    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:51.602218    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:42:51.602295    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:51.613213    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:51.613284    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:51.623766    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:51.623827    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:51.634323    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:51.634382    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:51.643782    7819 logs.go:276] 0 containers: []
	W0520 03:42:51.643796    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:51.643848    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:51.658929    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:51.658946    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:51.658951    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:51.675696    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:51.675709    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:51.687429    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:51.687438    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:51.699402    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:51.699413    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:51.711011    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:51.711022    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:51.745693    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:51.745705    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:51.760184    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:51.760195    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:51.775869    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:51.775879    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:51.800044    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:51.800051    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:51.804396    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:42:51.804405    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:42:51.815966    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:51.815977    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:51.833895    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:51.833904    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:51.851817    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:42:51.851830    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:42:51.863506    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:51.863517    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:51.874660    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:51.874672    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:54.410136    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:42:59.412721    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:42:59.412898    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:42:59.428354    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:42:59.428430    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:42:59.440648    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:42:59.440714    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:42:59.451107    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:42:59.451176    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:42:59.461665    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:42:59.461729    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:42:59.472019    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:42:59.472087    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:42:59.482147    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:42:59.482206    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:42:59.492189    7819 logs.go:276] 0 containers: []
	W0520 03:42:59.492204    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:42:59.492258    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:42:59.502365    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:42:59.502383    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:42:59.502387    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:42:59.513899    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:42:59.513911    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:42:59.526452    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:42:59.526465    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:42:59.550981    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:42:59.550989    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:42:59.564804    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:42:59.564816    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:42:59.579120    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:42:59.579129    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:42:59.592279    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:42:59.592292    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:42:59.603790    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:42:59.603800    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:42:59.639410    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:42:59.639423    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:42:59.644110    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:42:59.644118    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:42:59.663902    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:42:59.663914    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:42:59.675328    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:42:59.675343    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:42:59.708930    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:42:59.708938    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:42:59.724312    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:42:59.724325    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:42:59.741743    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:42:59.741755    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:02.264127    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:07.266789    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:07.266857    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:07.279394    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:07.279456    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:07.291154    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:07.291219    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:07.307866    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:07.307925    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:07.318895    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:07.318958    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:07.330999    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:07.331052    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:07.343450    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:07.343505    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:07.353628    7819 logs.go:276] 0 containers: []
	W0520 03:43:07.353638    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:07.353701    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:07.365439    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:07.365456    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:07.365462    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:07.405142    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:07.405155    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:07.417021    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:07.417030    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:07.452087    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:07.452102    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:07.456692    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:07.456700    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:07.471721    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:07.471738    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:07.484031    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:07.484043    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:07.502701    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:07.502721    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:07.515445    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:07.515456    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:07.530718    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:07.530729    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:07.544066    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:07.544078    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:07.556892    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:07.556904    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:07.573494    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:07.573507    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:07.586402    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:07.586413    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:07.612093    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:07.612112    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:10.127841    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:15.130577    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:15.130855    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:15.157144    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:15.157251    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:15.174822    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:15.174885    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:15.188821    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:15.188892    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:15.200154    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:15.200219    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:15.210609    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:15.210664    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:15.221132    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:15.221205    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:15.233370    7819 logs.go:276] 0 containers: []
	W0520 03:43:15.233378    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:15.233426    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:15.249286    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:15.249304    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:15.249309    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:15.274036    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:15.274046    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:15.288728    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:15.288741    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:15.292929    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:15.292939    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:15.326425    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:15.326435    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:15.342961    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:15.342973    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:15.357749    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:15.357763    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:15.375371    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:15.375382    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:15.390063    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:15.390076    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:15.401725    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:15.401735    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:15.413115    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:15.413127    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:15.447718    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:15.447726    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:15.461777    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:15.461790    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:15.477959    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:15.477969    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:15.490024    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:15.490034    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:18.003569    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:23.006305    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:23.006782    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:23.048497    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:23.048634    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:23.070757    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:23.070866    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:23.089709    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:23.089784    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:23.101325    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:23.101387    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:23.112090    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:23.112162    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:23.123034    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:23.123096    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:23.133863    7819 logs.go:276] 0 containers: []
	W0520 03:43:23.133872    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:23.133920    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:23.144024    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:23.144040    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:23.144046    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:23.179179    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:23.179189    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:23.183218    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:23.183224    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:23.216994    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:23.217006    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:23.229423    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:23.229432    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:23.241957    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:23.241970    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:23.254653    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:23.254664    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:23.272076    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:23.272086    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:23.284108    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:23.284118    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:23.295784    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:23.295797    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:23.310303    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:23.310315    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:23.334143    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:23.334152    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:23.349685    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:23.349695    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:23.365012    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:23.365023    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:23.377142    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:23.377151    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:25.895476    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:30.898062    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:30.898149    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:30.914207    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:30.914265    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:30.925717    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:30.925786    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:30.939120    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:30.939184    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:30.951212    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:30.951296    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:30.963029    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:30.963105    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:30.975046    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:30.975108    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:30.986263    7819 logs.go:276] 0 containers: []
	W0520 03:43:30.986274    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:30.986338    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:30.997679    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:30.997699    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:30.997705    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:31.035536    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:31.035548    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:31.040119    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:31.040130    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:31.052396    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:31.052410    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:31.065687    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:31.065699    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:31.084311    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:31.084324    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:31.099826    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:31.099837    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:31.114896    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:31.114908    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:31.133521    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:31.133531    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:31.150162    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:31.150171    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:31.173919    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:31.173937    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:31.191605    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:31.191614    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:31.225378    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:31.225399    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:31.246845    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:31.246857    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:31.267210    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:31.267221    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:33.782225    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:38.783463    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:38.783920    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:38.824876    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:38.825006    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:38.846599    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:38.846703    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:38.862124    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:38.862192    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:38.874589    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:38.874651    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:38.885895    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:38.885971    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:38.896656    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:38.896717    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:38.915101    7819 logs.go:276] 0 containers: []
	W0520 03:43:38.915112    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:38.915170    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:38.925250    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:38.925267    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:38.925271    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:38.939184    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:38.939195    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:38.950485    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:38.950497    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:38.966019    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:38.966030    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:38.977810    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:38.977823    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:38.988809    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:38.988819    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:39.028104    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:39.028121    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:39.039894    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:39.039904    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:39.051993    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:39.052004    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:39.076938    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:39.076946    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:39.089366    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:39.089380    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:39.101747    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:39.101758    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:39.118939    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:39.118951    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:39.123073    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:39.123081    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:39.156090    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:39.156100    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:41.671999    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:46.674456    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:46.674913    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:46.711579    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:46.711715    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:46.733514    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:46.733624    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:46.748209    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:46.748287    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:46.760442    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:46.760517    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:46.771439    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:46.771503    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:46.786659    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:46.786725    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:46.796690    7819 logs.go:276] 0 containers: []
	W0520 03:43:46.796702    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:46.796757    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:46.806955    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:46.806972    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:46.806977    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:46.821168    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:46.821178    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:46.832369    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:46.832381    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:46.844425    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:46.844439    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:46.879990    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:46.879999    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:46.914678    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:46.914689    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:46.918973    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:46.918982    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:46.932818    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:46.932829    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:46.950973    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:46.950985    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:46.962368    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:46.962381    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:46.974008    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:46.974020    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:46.985866    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:46.985874    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:47.002534    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:47.002546    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:47.026851    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:47.026864    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:47.038746    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:47.038755    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:49.563932    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:43:54.566581    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:43:54.566965    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 03:43:54.601786    7819 logs.go:276] 1 containers: [994c2fd14c7f]
	I0520 03:43:54.601903    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 03:43:54.623366    7819 logs.go:276] 1 containers: [de9c4a28b076]
	I0520 03:43:54.623461    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 03:43:54.638337    7819 logs.go:276] 4 containers: [2ff59bda190d d7c52233221a c1019828fe84 e428a6827bf2]
	I0520 03:43:54.638402    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 03:43:54.654459    7819 logs.go:276] 1 containers: [ec8fff9fb486]
	I0520 03:43:54.654530    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 03:43:54.665962    7819 logs.go:276] 1 containers: [2056a3843a24]
	I0520 03:43:54.666023    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 03:43:54.676997    7819 logs.go:276] 1 containers: [e00dc47687ab]
	I0520 03:43:54.677056    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 03:43:54.687235    7819 logs.go:276] 0 containers: []
	W0520 03:43:54.687245    7819 logs.go:278] No container was found matching "kindnet"
	I0520 03:43:54.687300    7819 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 03:43:54.697740    7819 logs.go:276] 1 containers: [cb62a627d659]
	I0520 03:43:54.697757    7819 logs.go:123] Gathering logs for coredns [e428a6827bf2] ...
	I0520 03:43:54.697762    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e428a6827bf2"
	I0520 03:43:54.709445    7819 logs.go:123] Gathering logs for kube-scheduler [ec8fff9fb486] ...
	I0520 03:43:54.709458    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec8fff9fb486"
	I0520 03:43:54.727120    7819 logs.go:123] Gathering logs for coredns [c1019828fe84] ...
	I0520 03:43:54.727131    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1019828fe84"
	I0520 03:43:54.738890    7819 logs.go:123] Gathering logs for describe nodes ...
	I0520 03:43:54.738899    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 03:43:54.773959    7819 logs.go:123] Gathering logs for coredns [d7c52233221a] ...
	I0520 03:43:54.773972    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7c52233221a"
	I0520 03:43:54.786143    7819 logs.go:123] Gathering logs for etcd [de9c4a28b076] ...
	I0520 03:43:54.786153    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9c4a28b076"
	I0520 03:43:54.800505    7819 logs.go:123] Gathering logs for kube-proxy [2056a3843a24] ...
	I0520 03:43:54.800516    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2056a3843a24"
	I0520 03:43:54.812913    7819 logs.go:123] Gathering logs for Docker ...
	I0520 03:43:54.812923    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 03:43:54.836045    7819 logs.go:123] Gathering logs for container status ...
	I0520 03:43:54.836054    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 03:43:54.847651    7819 logs.go:123] Gathering logs for kubelet ...
	I0520 03:43:54.847661    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 03:43:54.880728    7819 logs.go:123] Gathering logs for dmesg ...
	I0520 03:43:54.880735    7819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 03:43:54.884983    7819 logs.go:123] Gathering logs for kube-controller-manager [e00dc47687ab] ...
	I0520 03:43:54.884988    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e00dc47687ab"
	I0520 03:43:54.910653    7819 logs.go:123] Gathering logs for storage-provisioner [cb62a627d659] ...
	I0520 03:43:54.910660    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb62a627d659"
	I0520 03:43:54.922839    7819 logs.go:123] Gathering logs for kube-apiserver [994c2fd14c7f] ...
	I0520 03:43:54.922848    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 994c2fd14c7f"
	I0520 03:43:54.944938    7819 logs.go:123] Gathering logs for coredns [2ff59bda190d] ...
	I0520 03:43:54.944950    7819 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff59bda190d"
	I0520 03:43:57.459653    7819 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0520 03:44:02.461856    7819 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 03:44:02.468018    7819 out.go:177] 
	W0520 03:44:02.471873    7819 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0520 03:44:02.471904    7819 out.go:239] * 
	W0520 03:44:02.474717    7819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:02.483898    7819 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-555000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.39s)
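
The loop above repeats one cycle until the 6m0s budget expires: probe /healthz, hit the 5s client timeout, then re-enumerate the control-plane containers and tail their logs. A minimal sketch of running that probe by hand, assuming the profile from the log is still up and the guest is reachable (the curl flags are illustrative, not taken from the log):

	# probe the same endpoint the retry loop checks; a healthy apiserver answers "ok"
	minikube ssh -p stopped-upgrade-555000 -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	# tail the apiserver container the log-gathering step identifies above
	minikube ssh -p stopped-upgrade-555000 -- docker logs --tail 400 994c2fd14c7f
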

                                                
                                    
TestPause/serial/Start (9.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-665000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-665000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.805560542s)

                                                
                                                
-- stdout --
	* [pause-665000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-665000" primary control-plane node in "pause-665000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-665000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-665000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-665000 -n pause-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-665000 -n pause-665000: exit status 7 (65.730458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.87s)
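
Every qemu2 start in this block fails the same way: the driver cannot dial /var/run/socket_vmnet, gets "Connection refused", deletes the VM, retries once, and exits with GUEST_PROVISION. A minimal sketch for checking the vmnet helper on the CI host, assuming the default socket path shown in the log (the service-management details are an assumption, not confirmed by this report):

	# the unix socket the qemu2 driver dials; a missing or stale file explains "Connection refused"
	ls -l /var/run/socket_vmnet
	# confirm a socket_vmnet process is actually listening
	pgrep -fl socket_vmnet
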

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-727000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-727000 --driver=qemu2 : exit status 80 (9.705827208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-727000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-727000" primary control-plane node in "NoKubernetes-727000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-727000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-727000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000: exit status 7 (44.27875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.75s)
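The status exit code 7 in the post-mortem above is consistent with a fully stopped profile: minikube encodes component state bitwise in the exit status. The decoding below is an assumption (bit 0 host, bit 1 kubelet, bit 2 apiserver, per minikube's status help text, which this report does not reproduce); confirm against `minikube status --help` for the version under test:

	package main

	import "fmt"

	func main() {
		// Assumed bit layout; verify against `minikube status --help`
		// for the version under test (v1.33.1).
		code := 7 // exit status from the post-mortem above
		fmt.Println("host down:     ", code&1 != 0)
		fmt.Println("kubelet down:  ", code&2 != 0)
		fmt.Println("apiserver down:", code&4 != 0)
	}

This reading agrees with the harness's own note ("status error: exit status 7 (may be ok)") and the reported state="Stopped".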

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232814875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-727000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-727000
	* Restarting existing qemu2 VM for "NoKubernetes-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000: exit status 7 (53.014083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --driver=qemu2 : exit status 80 (5.233280542s)

                                                
                                                
-- stdout --
	* [NoKubernetes-727000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-727000
	* Restarting existing qemu2 VM for "NoKubernetes-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000: exit status 7 (64.375375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-727000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-727000 --driver=qemu2 : exit status 80 (5.254322125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-727000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-727000
	* Restarting existing qemu2 VM for "NoKubernetes-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-727000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-727000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-727000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-727000 -n NoKubernetes-727000: exit status 7 (53.732834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.727621542s)

                                                
                                                
-- stdout --
	* [auto-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-225000" primary control-plane node in "auto-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:42:08.671151    8073 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:42:08.671265    8073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:08.671268    8073 out.go:304] Setting ErrFile to fd 2...
	I0520 03:42:08.671271    8073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:08.671404    8073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:42:08.672466    8073 out.go:298] Setting JSON to false
	I0520 03:42:08.688702    8073 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6099,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:42:08.688800    8073 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:42:08.693100    8073 out.go:177] * [auto-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:42:08.701151    8073 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:42:08.701201    8073 notify.go:220] Checking for updates...
	I0520 03:42:08.708116    8073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:42:08.711164    8073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:42:08.714094    8073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:42:08.717112    8073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:42:08.720112    8073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:42:08.723473    8073 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:42:08.723545    8073 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:42:08.723607    8073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:42:08.727044    8073 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:42:08.733036    8073 start.go:297] selected driver: qemu2
	I0520 03:42:08.733044    8073 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:42:08.733050    8073 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:42:08.735233    8073 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:42:08.738086    8073 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:42:08.741310    8073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:42:08.741330    8073 cni.go:84] Creating CNI manager for ""
	I0520 03:42:08.741337    8073 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:42:08.741341    8073 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:42:08.741380    8073 start.go:340] cluster config:
	{Name:auto-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:42:08.745741    8073 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:42:08.753086    8073 out.go:177] * Starting "auto-225000" primary control-plane node in "auto-225000" cluster
	I0520 03:42:08.757111    8073 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:42:08.757126    8073 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:42:08.757136    8073 cache.go:56] Caching tarball of preloaded images
	I0520 03:42:08.757189    8073 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:42:08.757196    8073 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:42:08.757250    8073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/auto-225000/config.json ...
	I0520 03:42:08.757261    8073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/auto-225000/config.json: {Name:mk148b87a202351107d8181e0e6d387ff72eb2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:42:08.757549    8073 start.go:360] acquireMachinesLock for auto-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:08.757580    8073 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "auto-225000"
	I0520 03:42:08.757592    8073 start.go:93] Provisioning new machine with config: &{Name:auto-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:auto-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:08.757636    8073 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:08.766128    8073 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:08.782218    8073 start.go:159] libmachine.API.Create for "auto-225000" (driver="qemu2")
	I0520 03:42:08.782242    8073 client.go:168] LocalClient.Create starting
	I0520 03:42:08.782294    8073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:08.782322    8073 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:08.782331    8073 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:08.782369    8073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:08.782392    8073 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:08.782401    8073 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:08.782839    8073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:08.916893    8073 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:08.972938    8073 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:08.972944    8073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:08.973144    8073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2
	I0520 03:42:08.986086    8073 main.go:141] libmachine: STDOUT: 
	I0520 03:42:08.986104    8073 main.go:141] libmachine: STDERR: 
	I0520 03:42:08.986153    8073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2 +20000M
	I0520 03:42:08.997150    8073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:08.997170    8073 main.go:141] libmachine: STDERR: 
	I0520 03:42:08.997183    8073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2
	I0520 03:42:08.997187    8073 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:08.997227    8073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:93:a1:1c:36:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2
	I0520 03:42:08.998976    8073 main.go:141] libmachine: STDOUT: 
	I0520 03:42:08.998999    8073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:08.999021    8073 client.go:171] duration metric: took 216.760959ms to LocalClient.Create
	I0520 03:42:11.001215    8073 start.go:128] duration metric: took 2.243444s to createHost
	I0520 03:42:11.001244    8073 start.go:83] releasing machines lock for "auto-225000", held for 2.243533667s
	W0520 03:42:11.001281    8073 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:11.015930    8073 out.go:177] * Deleting "auto-225000" in qemu2 ...
	W0520 03:42:11.030337    8073 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:11.030345    8073 start.go:728] Will try again in 5 seconds ...
	I0520 03:42:16.030978    8073 start.go:360] acquireMachinesLock for auto-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:16.031534    8073 start.go:364] duration metric: took 476.875µs to acquireMachinesLock for "auto-225000"
	I0520 03:42:16.031652    8073 start.go:93] Provisioning new machine with config: &{Name:auto-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:auto-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:16.031957    8073 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:16.040566    8073 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:16.088805    8073 start.go:159] libmachine.API.Create for "auto-225000" (driver="qemu2")
	I0520 03:42:16.088859    8073 client.go:168] LocalClient.Create starting
	I0520 03:42:16.088976    8073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:16.089049    8073 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:16.089073    8073 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:16.089137    8073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:16.089190    8073 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:16.089203    8073 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:16.089714    8073 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:16.233941    8073 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:16.303077    8073 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:16.303084    8073 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:16.303285    8073 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2
	I0520 03:42:16.316220    8073 main.go:141] libmachine: STDOUT: 
	I0520 03:42:16.316242    8073 main.go:141] libmachine: STDERR: 
	I0520 03:42:16.316310    8073 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2 +20000M
	I0520 03:42:16.328233    8073 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:16.328256    8073 main.go:141] libmachine: STDERR: 
	I0520 03:42:16.328285    8073 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2
	I0520 03:42:16.328289    8073 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:16.328334    8073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:76:1e:6a:89:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/auto-225000/disk.qcow2
	I0520 03:42:16.330263    8073 main.go:141] libmachine: STDOUT: 
	I0520 03:42:16.330281    8073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:16.330292    8073 client.go:171] duration metric: took 241.414834ms to LocalClient.Create
	I0520 03:42:18.332711    8073 start.go:128] duration metric: took 2.300622416s to createHost
	I0520 03:42:18.332815    8073 start.go:83] releasing machines lock for "auto-225000", held for 2.301191084s
	W0520 03:42:18.333220    8073 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:18.340909    8073 out.go:177] 
	W0520 03:42:18.347076    8073 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:42:18.347121    8073 out.go:239] * 
	* 
	W0520 03:42:18.349004    8073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:42:18.357851    8073 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.73s)
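The --alsologtostderr trace above also shows how the VM is launched: socket_vmnet_client connects to /var/run/socket_vmnet and hands the connected socket to qemu as fd 3 ("-netdev socket,id=net0,fd=3"), which is why a refused connection kills the launch before qemu ever runs. A conceptual Go sketch of that hand-off; the real socket_vmnet_client is a separate program, so this only illustrates the fd-passing mechanism, with qemu's remaining arguments elided:

	package main

	import (
		"fmt"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the exact step that fails throughout this report.
			fmt.Fprintln(os.Stderr, "ERROR: Failed to connect:", err)
			os.Exit(1)
		}
		sock, err := conn.(*net.UnixConn).File()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// ExtraFiles[0] is inherited as fd 3 in the child process,
		// matching qemu's "-netdev socket,id=net0,fd=3" in the trace above.
		cmd := exec.Command("qemu-system-aarch64" /* remaining args elided */)
		cmd.ExtraFiles = []*os.File{sock}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}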

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.757130666s)

                                                
                                                
-- stdout --
	* [kindnet-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-225000" primary control-plane node in "kindnet-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:42:20.544053    8186 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:42:20.544193    8186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:20.544196    8186 out.go:304] Setting ErrFile to fd 2...
	I0520 03:42:20.544198    8186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:20.544332    8186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:42:20.545462    8186 out.go:298] Setting JSON to false
	I0520 03:42:20.561597    8186 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6111,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:42:20.561655    8186 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:42:20.566044    8186 out.go:177] * [kindnet-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:42:20.573066    8186 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:42:20.576975    8186 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:42:20.573177    8186 notify.go:220] Checking for updates...
	I0520 03:42:20.582939    8186 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:42:20.586042    8186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:42:20.588965    8186 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:42:20.591920    8186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:42:20.595311    8186 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:42:20.595386    8186 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:42:20.595433    8186 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:42:20.599960    8186 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:42:20.607012    8186 start.go:297] selected driver: qemu2
	I0520 03:42:20.607022    8186 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:42:20.607040    8186 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:42:20.609281    8186 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:42:20.611983    8186 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:42:20.615030    8186 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:42:20.615045    8186 cni.go:84] Creating CNI manager for "kindnet"
	I0520 03:42:20.615048    8186 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 03:42:20.615094    8186 start.go:340] cluster config:
	{Name:kindnet-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:42:20.619361    8186 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:42:20.627013    8186 out.go:177] * Starting "kindnet-225000" primary control-plane node in "kindnet-225000" cluster
	I0520 03:42:20.630978    8186 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:42:20.630995    8186 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:42:20.631008    8186 cache.go:56] Caching tarball of preloaded images
	I0520 03:42:20.631087    8186 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:42:20.631093    8186 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:42:20.631154    8186 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kindnet-225000/config.json ...
	I0520 03:42:20.631165    8186 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kindnet-225000/config.json: {Name:mk0dd9d4af7011d3174f0591063681165cc5140d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:42:20.631382    8186 start.go:360] acquireMachinesLock for kindnet-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:20.631416    8186 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "kindnet-225000"
	I0520 03:42:20.631428    8186 start.go:93] Provisioning new machine with config: &{Name:kindnet-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kindnet-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:20.631455    8186 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:20.639969    8186 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:20.657227    8186 start.go:159] libmachine.API.Create for "kindnet-225000" (driver="qemu2")
	I0520 03:42:20.657258    8186 client.go:168] LocalClient.Create starting
	I0520 03:42:20.657316    8186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:20.657347    8186 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:20.657356    8186 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:20.657397    8186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:20.657419    8186 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:20.657429    8186 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:20.657775    8186 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:20.791545    8186 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:20.909157    8186 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:20.909167    8186 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:20.909366    8186 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2
	I0520 03:42:20.921830    8186 main.go:141] libmachine: STDOUT: 
	I0520 03:42:20.921852    8186 main.go:141] libmachine: STDERR: 
	I0520 03:42:20.921931    8186 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2 +20000M
	I0520 03:42:20.933294    8186 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:20.933317    8186 main.go:141] libmachine: STDERR: 
	I0520 03:42:20.933332    8186 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2
	I0520 03:42:20.933337    8186 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:20.933371    8186 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:67:73:2d:8e:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2
	I0520 03:42:20.935072    8186 main.go:141] libmachine: STDOUT: 
	I0520 03:42:20.935087    8186 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:20.935106    8186 client.go:171] duration metric: took 277.837458ms to LocalClient.Create
	I0520 03:42:22.937251    8186 start.go:128] duration metric: took 2.305742792s to createHost
	I0520 03:42:22.937299    8186 start.go:83] releasing machines lock for "kindnet-225000", held for 2.30582975s
	W0520 03:42:22.937352    8186 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:22.946940    8186 out.go:177] * Deleting "kindnet-225000" in qemu2 ...
	W0520 03:42:22.965746    8186 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:22.965759    8186 start.go:728] Will try again in 5 seconds ...
	I0520 03:42:27.967922    8186 start.go:360] acquireMachinesLock for kindnet-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:27.968037    8186 start.go:364] duration metric: took 85.708µs to acquireMachinesLock for "kindnet-225000"
	I0520 03:42:27.968052    8186 start.go:93] Provisioning new machine with config: &{Name:kindnet-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:27.968121    8186 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:27.972325    8186 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:27.988363    8186 start.go:159] libmachine.API.Create for "kindnet-225000" (driver="qemu2")
	I0520 03:42:27.988408    8186 client.go:168] LocalClient.Create starting
	I0520 03:42:27.988482    8186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:27.988518    8186 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:27.988527    8186 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:27.988567    8186 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:27.988589    8186 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:27.988597    8186 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:27.988943    8186 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:28.125258    8186 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:28.202602    8186 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:28.202612    8186 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:28.202839    8186 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2
	I0520 03:42:28.217264    8186 main.go:141] libmachine: STDOUT: 
	I0520 03:42:28.217289    8186 main.go:141] libmachine: STDERR: 
	I0520 03:42:28.217350    8186 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2 +20000M
	I0520 03:42:28.229852    8186 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:28.229884    8186 main.go:141] libmachine: STDERR: 
	I0520 03:42:28.229907    8186 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2
	I0520 03:42:28.229913    8186 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:28.229956    8186 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d0:d6:3f:20:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kindnet-225000/disk.qcow2
	I0520 03:42:28.232036    8186 main.go:141] libmachine: STDOUT: 
	I0520 03:42:28.232060    8186 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:28.232076    8186 client.go:171] duration metric: took 243.662917ms to LocalClient.Create
	I0520 03:42:30.234277    8186 start.go:128] duration metric: took 2.266117417s to createHost
	I0520 03:42:30.234344    8186 start.go:83] releasing machines lock for "kindnet-225000", held for 2.2662865s
	W0520 03:42:30.234746    8186 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:30.245297    8186 out.go:177] 
	W0520 03:42:30.249444    8186 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:42:30.249469    8186 out.go:239] * 
	* 
	W0520 03:42:30.251432    8186 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:42:30.261302    8186 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.76s)
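
Every start in this group dies the same way: the socket_vmnet_client wrapper cannot reach the daemon socket at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION. Before reading further per-test logs, it is worth probing the daemon directly on the agent. A minimal check, assuming the client and socket paths shown in the log above (socket_vmnet_client connects to the socket and execs the given command with the vmnet fd attached, which is why the qemu command lines above use -netdev socket,fd=3):

    # does the socket exist, and who owns it?
    ls -l /var/run/socket_vmnet

    # exercise the client exactly as minikube does; "echo ok" stands in
    # for qemu-system-aarch64 and succeeds only if the connect succeeds
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If the second command reports the same "Connection refused", the daemon is down and every qemu2 profile on this host will fail identically.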

TestNetworkPlugins/group/calico/Start (9.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.906471084s)

-- stdout --
	* [calico-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-225000" primary control-plane node in "calico-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:42:32.533877    8301 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:42:32.534013    8301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:32.534016    8301 out.go:304] Setting ErrFile to fd 2...
	I0520 03:42:32.534019    8301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:32.534152    8301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:42:32.535297    8301 out.go:298] Setting JSON to false
	I0520 03:42:32.552188    8301 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6123,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:42:32.552268    8301 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:42:32.556827    8301 out.go:177] * [calico-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:42:32.564778    8301 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:42:32.569762    8301 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:42:32.564841    8301 notify.go:220] Checking for updates...
	I0520 03:42:32.573667    8301 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:42:32.576790    8301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:42:32.579778    8301 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:42:32.582732    8301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:42:32.586036    8301 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:42:32.586114    8301 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:42:32.586151    8301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:42:32.590743    8301 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:42:32.597797    8301 start.go:297] selected driver: qemu2
	I0520 03:42:32.597805    8301 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:42:32.597814    8301 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:42:32.600023    8301 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:42:32.602757    8301 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:42:32.612859    8301 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:42:32.612882    8301 cni.go:84] Creating CNI manager for "calico"
	I0520 03:42:32.612887    8301 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0520 03:42:32.612928    8301 start.go:340] cluster config:
	{Name:calico-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:42:32.617545    8301 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:42:32.624704    8301 out.go:177] * Starting "calico-225000" primary control-plane node in "calico-225000" cluster
	I0520 03:42:32.628695    8301 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:42:32.628709    8301 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:42:32.628720    8301 cache.go:56] Caching tarball of preloaded images
	I0520 03:42:32.628781    8301 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:42:32.628787    8301 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:42:32.628856    8301 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/calico-225000/config.json ...
	I0520 03:42:32.628867    8301 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/calico-225000/config.json: {Name:mkd652282fd15409c68a98d016ac0b7bdcb365a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:42:32.629297    8301 start.go:360] acquireMachinesLock for calico-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:32.629334    8301 start.go:364] duration metric: took 31.458µs to acquireMachinesLock for "calico-225000"
	I0520 03:42:32.629346    8301 start.go:93] Provisioning new machine with config: &{Name:calico-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:32.629372    8301 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:32.637728    8301 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:32.654830    8301 start.go:159] libmachine.API.Create for "calico-225000" (driver="qemu2")
	I0520 03:42:32.654860    8301 client.go:168] LocalClient.Create starting
	I0520 03:42:32.654924    8301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:32.654955    8301 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:32.654966    8301 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:32.655006    8301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:32.655030    8301 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:32.655039    8301 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:32.655448    8301 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:32.812445    8301 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:32.878479    8301 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:32.878485    8301 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:32.878667    8301 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2
	I0520 03:42:32.891242    8301 main.go:141] libmachine: STDOUT: 
	I0520 03:42:32.891267    8301 main.go:141] libmachine: STDERR: 
	I0520 03:42:32.891326    8301 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2 +20000M
	I0520 03:42:32.902788    8301 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:32.902806    8301 main.go:141] libmachine: STDERR: 
	I0520 03:42:32.902827    8301 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2
	I0520 03:42:32.902832    8301 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:32.902865    8301 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f8:8a:f2:3f:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2
	I0520 03:42:32.904657    8301 main.go:141] libmachine: STDOUT: 
	I0520 03:42:32.904673    8301 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:32.904701    8301 client.go:171] duration metric: took 249.825458ms to LocalClient.Create
	I0520 03:42:34.906912    8301 start.go:128] duration metric: took 2.277507459s to createHost
	I0520 03:42:34.907003    8301 start.go:83] releasing machines lock for "calico-225000", held for 2.277661334s
	W0520 03:42:34.907098    8301 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:34.919181    8301 out.go:177] * Deleting "calico-225000" in qemu2 ...
	W0520 03:42:34.938270    8301 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:34.938296    8301 start.go:728] Will try again in 5 seconds ...
	I0520 03:42:39.939305    8301 start.go:360] acquireMachinesLock for calico-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:39.939832    8301 start.go:364] duration metric: took 423.625µs to acquireMachinesLock for "calico-225000"
	I0520 03:42:39.939901    8301 start.go:93] Provisioning new machine with config: &{Name:calico-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:39.940157    8301 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:39.948733    8301 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:39.997205    8301 start.go:159] libmachine.API.Create for "calico-225000" (driver="qemu2")
	I0520 03:42:39.997262    8301 client.go:168] LocalClient.Create starting
	I0520 03:42:39.997400    8301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:39.997461    8301 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:39.997492    8301 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:39.997555    8301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:39.997603    8301 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:39.997619    8301 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:39.998172    8301 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:40.143477    8301 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:40.346147    8301 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:40.346155    8301 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:40.346381    8301 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2
	I0520 03:42:40.359398    8301 main.go:141] libmachine: STDOUT: 
	I0520 03:42:40.359429    8301 main.go:141] libmachine: STDERR: 
	I0520 03:42:40.359497    8301 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2 +20000M
	I0520 03:42:40.370841    8301 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:40.370869    8301 main.go:141] libmachine: STDERR: 
	I0520 03:42:40.370888    8301 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2
	I0520 03:42:40.370894    8301 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:40.370926    8301 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:d2:f1:18:c9:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/calico-225000/disk.qcow2
	I0520 03:42:40.372654    8301 main.go:141] libmachine: STDOUT: 
	I0520 03:42:40.372686    8301 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:40.372698    8301 client.go:171] duration metric: took 375.431958ms to LocalClient.Create
	I0520 03:42:42.375026    8301 start.go:128] duration metric: took 2.434819167s to createHost
	I0520 03:42:42.375143    8301 start.go:83] releasing machines lock for "calico-225000", held for 2.4353065s
	W0520 03:42:42.375505    8301 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:42.383115    8301 out.go:177] 
	W0520 03:42:42.387120    8301 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:42:42.387147    8301 out.go:239] * 
	* 
	W0520 03:42:42.389929    8301 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:42:42.398039    8301 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.91s)
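
The retry visible above (start.go:728 waits 5 seconds, deletes the profile, and recreates the VM) cannot succeed while the daemon itself is not running, so the fix belongs on the host rather than in minikube. The /opt/socket_vmnet install prefix in the log suggests a from-source install, which normally runs the daemon as a root launchd job; a sketch of checking and restarting it (the io.github.lima-vm.socket_vmnet label is an assumption and may differ per install):

    # is the daemon process alive, and is its launchd job loaded?
    pgrep -fl socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

    # restart the job if it is loaded but dead; label assumed from the
    # upstream socket_vmnet launchd plist
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet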

TestNetworkPlugins/group/custom-flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.793722916s)

-- stdout --
	* [custom-flannel-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-225000" primary control-plane node in "custom-flannel-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:42:44.844708    8419 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:42:44.844832    8419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:44.844836    8419 out.go:304] Setting ErrFile to fd 2...
	I0520 03:42:44.844838    8419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:44.844991    8419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:42:44.846078    8419 out.go:298] Setting JSON to false
	I0520 03:42:44.862811    8419 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6135,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:42:44.862880    8419 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:42:44.868768    8419 out.go:177] * [custom-flannel-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:42:44.881744    8419 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:42:44.878726    8419 notify.go:220] Checking for updates...
	I0520 03:42:44.887705    8419 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:42:44.890711    8419 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:42:44.892049    8419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:42:44.894675    8419 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:42:44.897719    8419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:42:44.900917    8419 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:42:44.900982    8419 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:42:44.901037    8419 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:42:44.905723    8419 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:42:44.912609    8419 start.go:297] selected driver: qemu2
	I0520 03:42:44.912621    8419 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:42:44.912628    8419 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:42:44.914815    8419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:42:44.917696    8419 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:42:44.920788    8419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:42:44.920804    8419 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0520 03:42:44.920812    8419 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0520 03:42:44.920841    8419 start.go:340] cluster config:
	{Name:custom-flannel-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:42:44.925119    8419 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:42:44.932701    8419 out.go:177] * Starting "custom-flannel-225000" primary control-plane node in "custom-flannel-225000" cluster
	I0520 03:42:44.936720    8419 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:42:44.936751    8419 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:42:44.936766    8419 cache.go:56] Caching tarball of preloaded images
	I0520 03:42:44.936829    8419 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:42:44.936835    8419 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:42:44.936895    8419 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/custom-flannel-225000/config.json ...
	I0520 03:42:44.936910    8419 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/custom-flannel-225000/config.json: {Name:mkc3b4b1009917877b1d2ae645d34a27404a5aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:42:44.937207    8419 start.go:360] acquireMachinesLock for custom-flannel-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:44.937240    8419 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "custom-flannel-225000"
	I0520 03:42:44.937251    8419 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:44.937281    8419 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:44.944706    8419 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:44.959395    8419 start.go:159] libmachine.API.Create for "custom-flannel-225000" (driver="qemu2")
	I0520 03:42:44.959425    8419 client.go:168] LocalClient.Create starting
	I0520 03:42:44.959484    8419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:44.959516    8419 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:44.959528    8419 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:44.959563    8419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:44.959585    8419 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:44.959594    8419 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:44.959918    8419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:45.094361    8419 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:45.195138    8419 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:45.195144    8419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:45.195349    8419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2
	I0520 03:42:45.207656    8419 main.go:141] libmachine: STDOUT: 
	I0520 03:42:45.207679    8419 main.go:141] libmachine: STDERR: 
	I0520 03:42:45.207739    8419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2 +20000M
	I0520 03:42:45.218531    8419 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:45.218549    8419 main.go:141] libmachine: STDERR: 
	I0520 03:42:45.218570    8419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2
	I0520 03:42:45.218576    8419 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:45.218614    8419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:20:01:a4:38:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2
	I0520 03:42:45.220356    8419 main.go:141] libmachine: STDOUT: 
	I0520 03:42:45.220372    8419 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:45.220404    8419 client.go:171] duration metric: took 260.963333ms to LocalClient.Create
	I0520 03:42:47.222661    8419 start.go:128] duration metric: took 2.285378292s to createHost
	I0520 03:42:47.222725    8419 start.go:83] releasing machines lock for "custom-flannel-225000", held for 2.285500041s
	W0520 03:42:47.222780    8419 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:47.228568    8419 out.go:177] * Deleting "custom-flannel-225000" in qemu2 ...
	W0520 03:42:47.250627    8419 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:47.250651    8419 start.go:728] Will try again in 5 seconds ...
	I0520 03:42:52.250954    8419 start.go:360] acquireMachinesLock for custom-flannel-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:52.251554    8419 start.go:364] duration metric: took 480.333µs to acquireMachinesLock for "custom-flannel-225000"
	I0520 03:42:52.251631    8419 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:52.251938    8419 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:52.261401    8419 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:52.312140    8419 start.go:159] libmachine.API.Create for "custom-flannel-225000" (driver="qemu2")
	I0520 03:42:52.312202    8419 client.go:168] LocalClient.Create starting
	I0520 03:42:52.312324    8419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:52.312384    8419 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:52.312402    8419 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:52.312468    8419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:52.312511    8419 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:52.312524    8419 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:52.313022    8419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:52.458179    8419 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:52.538654    8419 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:52.538660    8419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:52.538866    8419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2
	I0520 03:42:52.551246    8419 main.go:141] libmachine: STDOUT: 
	I0520 03:42:52.551269    8419 main.go:141] libmachine: STDERR: 
	I0520 03:42:52.551326    8419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2 +20000M
	I0520 03:42:52.562858    8419 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:52.562888    8419 main.go:141] libmachine: STDERR: 
	I0520 03:42:52.562901    8419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2
	I0520 03:42:52.562904    8419 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:52.562938    8419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:45:24:d8:3c:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/custom-flannel-225000/disk.qcow2
	I0520 03:42:52.564744    8419 main.go:141] libmachine: STDOUT: 
	I0520 03:42:52.564763    8419 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:52.564773    8419 client.go:171] duration metric: took 252.569584ms to LocalClient.Create
	I0520 03:42:54.566938    8419 start.go:128] duration metric: took 2.314995834s to createHost
	I0520 03:42:54.567046    8419 start.go:83] releasing machines lock for "custom-flannel-225000", held for 2.315487292s
	W0520 03:42:54.567545    8419 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:54.581449    8419 out.go:177] 
	W0520 03:42:54.585071    8419 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:42:54.585125    8419 out.go:239] * 
	* 
	W0520 03:42:54.586875    8419 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:42:54.597025    8419 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
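Every start in this group fails the same way: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network fd and the run exits with status 80. A minimal host-side triage sketch, assuming the Homebrew-managed socket_vmnet setup minikube documents for this driver (the service commands below are assumptions about the CI host, not taken from this log):

	# Is anything serving the socket that socket_vmnet_client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If no daemon is running, restart it (it must run as root to create
	# the vmnet interface), then retry the failed start:
	sudo brew services restart socket_vmnet

Once the daemon is reachable again, rerunning the same start command above should get past VM creation.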

                                                
                                    
TestNetworkPlugins/group/false/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.961697s)

                                                
                                                
-- stdout --
	* [false-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-225000" primary control-plane node in "false-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:42:56.969044    8537 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:42:56.969193    8537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:56.969196    8537 out.go:304] Setting ErrFile to fd 2...
	I0520 03:42:56.969199    8537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:42:56.969322    8537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:42:56.970437    8537 out.go:298] Setting JSON to false
	I0520 03:42:56.987210    8537 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6147,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:42:56.987277    8537 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:42:56.990929    8537 out.go:177] * [false-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:42:56.999010    8537 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:42:56.999069    8537 notify.go:220] Checking for updates...
	I0520 03:42:57.001902    8537 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:42:57.004903    8537 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:42:57.007964    8537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:42:57.010882    8537 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:42:57.013943    8537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:42:57.017301    8537 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:42:57.017370    8537 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:42:57.017405    8537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:42:57.020973    8537 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:42:57.027980    8537 start.go:297] selected driver: qemu2
	I0520 03:42:57.027990    8537 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:42:57.027996    8537 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:42:57.030123    8537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:42:57.031635    8537 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:42:57.034994    8537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:42:57.035008    8537 cni.go:84] Creating CNI manager for "false"
	I0520 03:42:57.035035    8537 start.go:340] cluster config:
	{Name:false-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:42:57.039343    8537 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:42:57.046881    8537 out.go:177] * Starting "false-225000" primary control-plane node in "false-225000" cluster
	I0520 03:42:57.050953    8537 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:42:57.050968    8537 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:42:57.050980    8537 cache.go:56] Caching tarball of preloaded images
	I0520 03:42:57.051051    8537 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:42:57.051057    8537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:42:57.051116    8537 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/false-225000/config.json ...
	I0520 03:42:57.051127    8537 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/false-225000/config.json: {Name:mk2c0862451ceed769f68c8c9772deaae496f7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:42:57.051438    8537 start.go:360] acquireMachinesLock for false-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:42:57.051469    8537 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "false-225000"
	I0520 03:42:57.051480    8537 start.go:93] Provisioning new machine with config: &{Name:false-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:42:57.051503    8537 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:42:57.055976    8537 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:42:57.071461    8537 start.go:159] libmachine.API.Create for "false-225000" (driver="qemu2")
	I0520 03:42:57.071484    8537 client.go:168] LocalClient.Create starting
	I0520 03:42:57.071574    8537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:42:57.071605    8537 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:57.071614    8537 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:57.071652    8537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:42:57.071674    8537 main.go:141] libmachine: Decoding PEM data...
	I0520 03:42:57.071681    8537 main.go:141] libmachine: Parsing certificate...
	I0520 03:42:57.072081    8537 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:42:57.204653    8537 main.go:141] libmachine: Creating SSH key...
	I0520 03:42:57.538797    8537 main.go:141] libmachine: Creating Disk image...
	I0520 03:42:57.538807    8537 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:42:57.539048    8537 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2
	I0520 03:42:57.552087    8537 main.go:141] libmachine: STDOUT: 
	I0520 03:42:57.552109    8537 main.go:141] libmachine: STDERR: 
	I0520 03:42:57.552176    8537 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2 +20000M
	I0520 03:42:57.563418    8537 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:42:57.563433    8537 main.go:141] libmachine: STDERR: 
	I0520 03:42:57.563459    8537 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2
	I0520 03:42:57.563463    8537 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:42:57.563491    8537 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:fd:90:30:68:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2
	I0520 03:42:57.565219    8537 main.go:141] libmachine: STDOUT: 
	I0520 03:42:57.565233    8537 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:42:57.565252    8537 client.go:171] duration metric: took 493.769167ms to LocalClient.Create
	I0520 03:42:59.565357    8537 start.go:128] duration metric: took 2.513881292s to createHost
	I0520 03:42:59.565367    8537 start.go:83] releasing machines lock for "false-225000", held for 2.513926584s
	W0520 03:42:59.565388    8537 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:59.574388    8537 out.go:177] * Deleting "false-225000" in qemu2 ...
	W0520 03:42:59.583568    8537 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:42:59.583580    8537 start.go:728] Will try again in 5 seconds ...
	I0520 03:43:04.585726    8537 start.go:360] acquireMachinesLock for false-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:04.586103    8537 start.go:364] duration metric: took 321.875µs to acquireMachinesLock for "false-225000"
	I0520 03:43:04.586151    8537 start.go:93] Provisioning new machine with config: &{Name:false-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:04.586364    8537 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:04.590747    8537 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:04.628717    8537 start.go:159] libmachine.API.Create for "false-225000" (driver="qemu2")
	I0520 03:43:04.628772    8537 client.go:168] LocalClient.Create starting
	I0520 03:43:04.628875    8537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:04.628938    8537 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:04.628951    8537 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:04.629007    8537 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:04.629047    8537 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:04.629060    8537 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:04.629570    8537 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:04.770486    8537 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:04.833514    8537 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:04.833526    8537 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:04.833770    8537 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2
	I0520 03:43:04.846651    8537 main.go:141] libmachine: STDOUT: 
	I0520 03:43:04.846682    8537 main.go:141] libmachine: STDERR: 
	I0520 03:43:04.846752    8537 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2 +20000M
	I0520 03:43:04.857957    8537 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:04.857990    8537 main.go:141] libmachine: STDERR: 
	I0520 03:43:04.858007    8537 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2
	I0520 03:43:04.858024    8537 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:04.858060    8537 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:dd:29:93:b1:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/false-225000/disk.qcow2
	I0520 03:43:04.859809    8537 main.go:141] libmachine: STDOUT: 
	I0520 03:43:04.859824    8537 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:04.859839    8537 client.go:171] duration metric: took 231.064875ms to LocalClient.Create
	I0520 03:43:06.862035    8537 start.go:128] duration metric: took 2.275635375s to createHost
	I0520 03:43:06.862109    8537 start.go:83] releasing machines lock for "false-225000", held for 2.276018959s
	W0520 03:43:06.862584    8537 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:06.872327    8537 out.go:177] 
	W0520 03:43:06.878352    8537 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:43:06.878388    8537 out.go:239] * 
	W0520 03:43:06.880972    8537 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:43:06.890278    8537 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.96s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.777513625s)

                                                
                                                
-- stdout --
	* [enable-default-cni-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-225000" primary control-plane node in "enable-default-cni-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:43:09.100354    8649 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:43:09.100480    8649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:09.100485    8649 out.go:304] Setting ErrFile to fd 2...
	I0520 03:43:09.100487    8649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:09.100623    8649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:43:09.101771    8649 out.go:298] Setting JSON to false
	I0520 03:43:09.118257    8649 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6160,"bootTime":1716195629,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:43:09.118331    8649 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:43:09.123317    8649 out.go:177] * [enable-default-cni-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:43:09.130246    8649 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:43:09.135177    8649 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:43:09.130302    8649 notify.go:220] Checking for updates...
	I0520 03:43:09.141228    8649 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:43:09.149214    8649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:43:09.152258    8649 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:43:09.155252    8649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:43:09.158488    8649 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:09.158562    8649 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:43:09.158608    8649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:43:09.163117    8649 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:43:09.170177    8649 start.go:297] selected driver: qemu2
	I0520 03:43:09.170183    8649 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:43:09.170189    8649 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:43:09.172468    8649 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:43:09.175150    8649 out.go:177] * Automatically selected the socket_vmnet network
	E0520 03:43:09.178293    8649 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0520 03:43:09.178310    8649 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:43:09.178325    8649 cni.go:84] Creating CNI manager for "bridge"
	I0520 03:43:09.178330    8649 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:43:09.178382    8649 start.go:340] cluster config:
	{Name:enable-default-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:09.183050    8649 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:43:09.188180    8649 out.go:177] * Starting "enable-default-cni-225000" primary control-plane node in "enable-default-cni-225000" cluster
	I0520 03:43:09.192193    8649 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:43:09.192209    8649 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:43:09.192233    8649 cache.go:56] Caching tarball of preloaded images
	I0520 03:43:09.192290    8649 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:43:09.192297    8649 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:43:09.192360    8649 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/enable-default-cni-225000/config.json ...
	I0520 03:43:09.192372    8649 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/enable-default-cni-225000/config.json: {Name:mk872b7ccff48df7c23f90e562b06ee09c84a79d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:43:09.192572    8649 start.go:360] acquireMachinesLock for enable-default-cni-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:09.192603    8649 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "enable-default-cni-225000"
	I0520 03:43:09.192614    8649 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:09.192639    8649 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:09.200214    8649 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:09.215240    8649 start.go:159] libmachine.API.Create for "enable-default-cni-225000" (driver="qemu2")
	I0520 03:43:09.215267    8649 client.go:168] LocalClient.Create starting
	I0520 03:43:09.215333    8649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:09.215364    8649 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:09.215373    8649 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:09.215418    8649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:09.215439    8649 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:09.215446    8649 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:09.215775    8649 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:09.351300    8649 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:09.440235    8649 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:09.440241    8649 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:09.440435    8649 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2
	I0520 03:43:09.453732    8649 main.go:141] libmachine: STDOUT: 
	I0520 03:43:09.453746    8649 main.go:141] libmachine: STDERR: 
	I0520 03:43:09.453813    8649 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2 +20000M
	I0520 03:43:09.465425    8649 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:09.465440    8649 main.go:141] libmachine: STDERR: 
	I0520 03:43:09.465472    8649 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2
	I0520 03:43:09.465477    8649 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:09.465515    8649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ef:5b:75:83:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2
	I0520 03:43:09.467393    8649 main.go:141] libmachine: STDOUT: 
	I0520 03:43:09.467408    8649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:09.467431    8649 client.go:171] duration metric: took 252.162333ms to LocalClient.Create
	I0520 03:43:11.469531    8649 start.go:128] duration metric: took 2.276913042s to createHost
	I0520 03:43:11.469564    8649 start.go:83] releasing machines lock for "enable-default-cni-225000", held for 2.276987625s
	W0520 03:43:11.469596    8649 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:11.474936    8649 out.go:177] * Deleting "enable-default-cni-225000" in qemu2 ...
	W0520 03:43:11.495309    8649 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:11.495328    8649 start.go:728] Will try again in 5 seconds ...
	I0520 03:43:16.497412    8649 start.go:360] acquireMachinesLock for enable-default-cni-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:16.497666    8649 start.go:364] duration metric: took 169.916µs to acquireMachinesLock for "enable-default-cni-225000"
	I0520 03:43:16.497727    8649 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:16.497841    8649 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:16.507284    8649 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:16.535736    8649 start.go:159] libmachine.API.Create for "enable-default-cni-225000" (driver="qemu2")
	I0520 03:43:16.535770    8649 client.go:168] LocalClient.Create starting
	I0520 03:43:16.535906    8649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:16.535966    8649 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:16.535983    8649 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:16.536033    8649 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:16.536068    8649 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:16.536078    8649 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:16.536492    8649 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:16.673869    8649 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:16.782708    8649 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:16.782714    8649 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:16.782923    8649 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2
	I0520 03:43:16.795721    8649 main.go:141] libmachine: STDOUT: 
	I0520 03:43:16.795746    8649 main.go:141] libmachine: STDERR: 
	I0520 03:43:16.795799    8649 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2 +20000M
	I0520 03:43:16.807495    8649 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:16.807518    8649 main.go:141] libmachine: STDERR: 
	I0520 03:43:16.807532    8649 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2
	I0520 03:43:16.807538    8649 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:16.807579    8649 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:43:0d:f9:37:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/enable-default-cni-225000/disk.qcow2
	I0520 03:43:16.809594    8649 main.go:141] libmachine: STDOUT: 
	I0520 03:43:16.809610    8649 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:16.809622    8649 client.go:171] duration metric: took 273.85225ms to LocalClient.Create
	I0520 03:43:18.811971    8649 start.go:128] duration metric: took 2.3140975s to createHost
	I0520 03:43:18.812102    8649 start.go:83] releasing machines lock for "enable-default-cni-225000", held for 2.314453958s
	W0520 03:43:18.812418    8649 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:18.822060    8649 out.go:177] 
	W0520 03:43:18.826228    8649 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:43:18.826288    8649 out.go:239] * 
	W0520 03:43:18.828222    8649 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:43:18.838004    8649 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.78s)
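Note the E-level line in the stderr above: --enable-default-cni is deprecated, and minikube rewrites it to --cni=bridge (the cluster config indeed ends up with CNI:bridge and NetworkPlugin:cni). An equivalent invocation without the deprecated flag would look like this sketch, using the same profile name and limits as the failing run (it would still fail here until socket_vmnet is reachable):

	out/minikube-darwin-arm64 start -p enable-default-cni-225000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2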

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.782839917s)

                                                
                                                
-- stdout --
	* [flannel-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-225000" primary control-plane node in "flannel-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:43:21.023173    8762 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:43:21.023321    8762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:21.023324    8762 out.go:304] Setting ErrFile to fd 2...
	I0520 03:43:21.023326    8762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:21.023446    8762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:43:21.024529    8762 out.go:298] Setting JSON to false
	I0520 03:43:21.041241    8762 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6172,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:43:21.041312    8762 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:43:21.045517    8762 out.go:177] * [flannel-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:43:21.053681    8762 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:43:21.056684    8762 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:43:21.053755    8762 notify.go:220] Checking for updates...
	I0520 03:43:21.059633    8762 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:43:21.062703    8762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:43:21.064126    8762 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:43:21.067689    8762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:43:21.071016    8762 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:21.071082    8762 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:43:21.071126    8762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:43:21.075429    8762 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:43:21.082621    8762 start.go:297] selected driver: qemu2
	I0520 03:43:21.082627    8762 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:43:21.082633    8762 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:43:21.084739    8762 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:43:21.087723    8762 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:43:21.090701    8762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:43:21.090718    8762 cni.go:84] Creating CNI manager for "flannel"
	I0520 03:43:21.090721    8762 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0520 03:43:21.090749    8762 start.go:340] cluster config:
	{Name:flannel-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:21.094847    8762 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:43:21.100618    8762 out.go:177] * Starting "flannel-225000" primary control-plane node in "flannel-225000" cluster
	I0520 03:43:21.104573    8762 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:43:21.104587    8762 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:43:21.104597    8762 cache.go:56] Caching tarball of preloaded images
	I0520 03:43:21.104645    8762 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:43:21.104651    8762 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:43:21.104697    8762 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/flannel-225000/config.json ...
	I0520 03:43:21.104708    8762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/flannel-225000/config.json: {Name:mkb2fe51598e2d34f6a1da30c2727f1359999853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:43:21.104906    8762 start.go:360] acquireMachinesLock for flannel-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:21.104937    8762 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "flannel-225000"
	I0520 03:43:21.104948    8762 start.go:93] Provisioning new machine with config: &{Name:flannel-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:21.104975    8762 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:21.112634    8762 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:21.128059    8762 start.go:159] libmachine.API.Create for "flannel-225000" (driver="qemu2")
	I0520 03:43:21.128086    8762 client.go:168] LocalClient.Create starting
	I0520 03:43:21.128148    8762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:21.128180    8762 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:21.128194    8762 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:21.128232    8762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:21.128258    8762 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:21.128265    8762 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:21.128616    8762 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:21.263603    8762 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:21.314251    8762 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:21.314256    8762 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:21.314431    8762 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2
	I0520 03:43:21.327454    8762 main.go:141] libmachine: STDOUT: 
	I0520 03:43:21.327475    8762 main.go:141] libmachine: STDERR: 
	I0520 03:43:21.327529    8762 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2 +20000M
	I0520 03:43:21.338936    8762 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:21.338971    8762 main.go:141] libmachine: STDERR: 
	I0520 03:43:21.338985    8762 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2
	I0520 03:43:21.338990    8762 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:21.339040    8762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d3:ac:24:46:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2
	I0520 03:43:21.341053    8762 main.go:141] libmachine: STDOUT: 
	I0520 03:43:21.341069    8762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:21.341091    8762 client.go:171] duration metric: took 213.002875ms to LocalClient.Create
	I0520 03:43:23.343160    8762 start.go:128] duration metric: took 2.238200292s to createHost
	I0520 03:43:23.343177    8762 start.go:83] releasing machines lock for "flannel-225000", held for 2.238270584s
	W0520 03:43:23.343198    8762 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:23.351716    8762 out.go:177] * Deleting "flannel-225000" in qemu2 ...
	W0520 03:43:23.360679    8762 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:23.360691    8762 start.go:728] Will try again in 5 seconds ...
	I0520 03:43:28.362849    8762 start.go:360] acquireMachinesLock for flannel-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:28.363491    8762 start.go:364] duration metric: took 464.209µs to acquireMachinesLock for "flannel-225000"
	I0520 03:43:28.363634    8762 start.go:93] Provisioning new machine with config: &{Name:flannel-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:28.363855    8762 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:28.372502    8762 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:28.424751    8762 start.go:159] libmachine.API.Create for "flannel-225000" (driver="qemu2")
	I0520 03:43:28.424808    8762 client.go:168] LocalClient.Create starting
	I0520 03:43:28.424933    8762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:28.425001    8762 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:28.425017    8762 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:28.425086    8762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:28.425143    8762 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:28.425156    8762 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:28.425719    8762 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:28.570499    8762 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:28.704923    8762 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:28.704933    8762 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:28.705118    8762 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2
	I0520 03:43:28.717910    8762 main.go:141] libmachine: STDOUT: 
	I0520 03:43:28.717927    8762 main.go:141] libmachine: STDERR: 
	I0520 03:43:28.717990    8762 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2 +20000M
	I0520 03:43:28.728947    8762 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:28.728967    8762 main.go:141] libmachine: STDERR: 
	I0520 03:43:28.728986    8762 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2
	I0520 03:43:28.728992    8762 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:28.729034    8762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:eb:6a:e4:eb:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/flannel-225000/disk.qcow2
	I0520 03:43:28.730872    8762 main.go:141] libmachine: STDOUT: 
	I0520 03:43:28.730888    8762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:28.730900    8762 client.go:171] duration metric: took 306.092375ms to LocalClient.Create
	I0520 03:43:30.733087    8762 start.go:128] duration metric: took 2.369236s to createHost
	I0520 03:43:30.733173    8762 start.go:83] releasing machines lock for "flannel-225000", held for 2.3696955s
	W0520 03:43:30.733560    8762 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:30.746460    8762 out.go:177] 
	W0520 03:43:30.750309    8762 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:43:30.750336    8762 out.go:239] * 
	W0520 03:43:30.752840    8762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:43:30.764294    8762 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.78s)
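
Note: every failure in this run reduces to the same root cause — nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU its network file descriptor and the VM never starts. The check below is a minimal diagnostic sketch in Go (the test suite's own language), assuming only the socket path that appears in the log above; it is not part of the test suite itself.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Probe the Unix socket that socket_vmnet_client connects to.
	// If the socket_vmnet daemon is down, DialTimeout fails with the
	// same "connection refused" seen in the ERROR lines above.
	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("dial %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If this probe fails on the build agent, the fix is environmental (restart the socket_vmnet service) rather than anything flannel-specific; the identical error recurs in the bridge and kubenet runs below.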

TestNetworkPlugins/group/bridge/Start (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.95719s)

-- stdout --
	* [bridge-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-225000" primary control-plane node in "bridge-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:43:33.166669    8881 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:43:33.166801    8881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:33.166804    8881 out.go:304] Setting ErrFile to fd 2...
	I0520 03:43:33.166806    8881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:33.166967    8881 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:43:33.168051    8881 out.go:298] Setting JSON to false
	I0520 03:43:33.185126    8881 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6184,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:43:33.185208    8881 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:43:33.190533    8881 out.go:177] * [bridge-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:43:33.197372    8881 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:43:33.200458    8881 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:43:33.197426    8881 notify.go:220] Checking for updates...
	I0520 03:43:33.207382    8881 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:43:33.213344    8881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:43:33.217327    8881 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:43:33.220391    8881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:43:33.224650    8881 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:33.224723    8881 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:43:33.224765    8881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:43:33.229376    8881 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:43:33.236262    8881 start.go:297] selected driver: qemu2
	I0520 03:43:33.236269    8881 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:43:33.236275    8881 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:43:33.238433    8881 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:43:33.242364    8881 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:43:33.246464    8881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:43:33.246480    8881 cni.go:84] Creating CNI manager for "bridge"
	I0520 03:43:33.246484    8881 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:43:33.246522    8881 start.go:340] cluster config:
	{Name:bridge-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:33.251098    8881 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:43:33.259401    8881 out.go:177] * Starting "bridge-225000" primary control-plane node in "bridge-225000" cluster
	I0520 03:43:33.263331    8881 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:43:33.263351    8881 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:43:33.263360    8881 cache.go:56] Caching tarball of preloaded images
	I0520 03:43:33.263422    8881 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:43:33.263427    8881 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:43:33.263490    8881 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/bridge-225000/config.json ...
	I0520 03:43:33.263501    8881 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/bridge-225000/config.json: {Name:mk76a872ad56005cc0d391a0b6ca3a6de5737503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:43:33.263722    8881 start.go:360] acquireMachinesLock for bridge-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:33.263760    8881 start.go:364] duration metric: took 31.541µs to acquireMachinesLock for "bridge-225000"
	I0520 03:43:33.263772    8881 start.go:93] Provisioning new machine with config: &{Name:bridge-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:33.263802    8881 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:33.272328    8881 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:33.289302    8881 start.go:159] libmachine.API.Create for "bridge-225000" (driver="qemu2")
	I0520 03:43:33.289332    8881 client.go:168] LocalClient.Create starting
	I0520 03:43:33.289402    8881 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:33.289431    8881 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:33.289440    8881 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:33.289482    8881 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:33.289506    8881 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:33.289515    8881 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:33.289848    8881 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:33.420164    8881 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:33.682182    8881 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:33.682193    8881 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:33.682398    8881 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2
	I0520 03:43:33.695085    8881 main.go:141] libmachine: STDOUT: 
	I0520 03:43:33.695108    8881 main.go:141] libmachine: STDERR: 
	I0520 03:43:33.695175    8881 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2 +20000M
	I0520 03:43:33.706269    8881 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:33.706285    8881 main.go:141] libmachine: STDERR: 
	I0520 03:43:33.706301    8881 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2
	I0520 03:43:33.706306    8881 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:33.706336    8881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:f9:37:f9:60:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2
	I0520 03:43:33.707994    8881 main.go:141] libmachine: STDOUT: 
	I0520 03:43:33.708009    8881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:33.708026    8881 client.go:171] duration metric: took 418.695084ms to LocalClient.Create
	I0520 03:43:35.710169    8881 start.go:128] duration metric: took 2.446381166s to createHost
	I0520 03:43:35.710226    8881 start.go:83] releasing machines lock for "bridge-225000", held for 2.446497s
	W0520 03:43:35.710313    8881 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:35.727309    8881 out.go:177] * Deleting "bridge-225000" in qemu2 ...
	W0520 03:43:35.748855    8881 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:35.748876    8881 start.go:728] Will try again in 5 seconds ...
	I0520 03:43:40.750581    8881 start.go:360] acquireMachinesLock for bridge-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:40.751228    8881 start.go:364] duration metric: took 501.375µs to acquireMachinesLock for "bridge-225000"
	I0520 03:43:40.751297    8881 start.go:93] Provisioning new machine with config: &{Name:bridge-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:40.751617    8881 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:40.761344    8881 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:40.812241    8881 start.go:159] libmachine.API.Create for "bridge-225000" (driver="qemu2")
	I0520 03:43:40.812297    8881 client.go:168] LocalClient.Create starting
	I0520 03:43:40.812411    8881 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:40.812482    8881 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:40.812503    8881 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:40.812570    8881 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:40.812614    8881 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:40.812626    8881 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:40.813158    8881 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:40.957902    8881 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:41.027233    8881 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:41.027243    8881 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:41.027564    8881 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2
	I0520 03:43:41.041510    8881 main.go:141] libmachine: STDOUT: 
	I0520 03:43:41.041533    8881 main.go:141] libmachine: STDERR: 
	I0520 03:43:41.041608    8881 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2 +20000M
	I0520 03:43:41.054247    8881 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:41.054274    8881 main.go:141] libmachine: STDERR: 
	I0520 03:43:41.054295    8881 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2
	I0520 03:43:41.054302    8881 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:41.054345    8881 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:d4:9c:29:fb:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/bridge-225000/disk.qcow2
	I0520 03:43:41.056381    8881 main.go:141] libmachine: STDOUT: 
	I0520 03:43:41.056402    8881 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:41.056432    8881 client.go:171] duration metric: took 244.132583ms to LocalClient.Create
	I0520 03:43:43.058502    8881 start.go:128] duration metric: took 2.306870875s to createHost
	I0520 03:43:43.058525    8881 start.go:83] releasing machines lock for "bridge-225000", held for 2.307313s
	W0520 03:43:43.059320    8881 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:43.065052    8881 out.go:177] 
	W0520 03:43:43.073288    8881 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:43:43.073309    8881 out.go:239] * 
	W0520 03:43:43.074505    8881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:43:43.085889    8881 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)
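
Note: the bridge run exercises the same retry choreography visible in the flannel log above — createHost fails, the half-created profile is deleted, minikube waits five seconds (start.go:728), retries once, and only then exits with GUEST_PROVISION and exit status 80. Below is a hedged sketch of that control flow; the function name createHost is an illustrative stand-in for minikube's internal create path, not its actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-in for the libmachine create path, which in these runs always
	// fails because the socket_vmnet socket refuses connections.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if createHost() == nil {
			return // first attempt succeeded
		}
		fmt.Println("! StartHost failed, but will try again")
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			// The GUEST_PROVISION exit seen in this report, surfaced to the
			// test harness as exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the failure precedes any CNI setup, the choice of --cni=bridge (like --cni=flannel before it and --network-plugin=kubenet next) never comes into play.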

TestNetworkPlugins/group/kubenet/Start (9.71s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-225000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.713119958s)

-- stdout --
	* [kubenet-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-225000" primary control-plane node in "kubenet-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:43:45.245285    8992 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:43:45.245422    8992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:45.245425    8992 out.go:304] Setting ErrFile to fd 2...
	I0520 03:43:45.245428    8992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:45.245564    8992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:43:45.246686    8992 out.go:298] Setting JSON to false
	I0520 03:43:45.263412    8992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6196,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:43:45.263488    8992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:43:45.268029    8992 out.go:177] * [kubenet-225000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:43:45.275963    8992 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:43:45.276013    8992 notify.go:220] Checking for updates...
	I0520 03:43:45.278867    8992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:43:45.281939    8992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:43:45.284913    8992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:43:45.287926    8992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:43:45.290915    8992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:43:45.294236    8992 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:45.294300    8992 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:43:45.294346    8992 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:43:45.298899    8992 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:43:45.305949    8992 start.go:297] selected driver: qemu2
	I0520 03:43:45.305959    8992 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:43:45.305976    8992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:43:45.308225    8992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:43:45.310821    8992 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:43:45.314067    8992 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:43:45.314088    8992 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0520 03:43:45.314143    8992 start.go:340] cluster config:
	{Name:kubenet-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:45.318280    8992 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:43:45.324901    8992 out.go:177] * Starting "kubenet-225000" primary control-plane node in "kubenet-225000" cluster
	I0520 03:43:45.328899    8992 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:43:45.328919    8992 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:43:45.328937    8992 cache.go:56] Caching tarball of preloaded images
	I0520 03:43:45.328998    8992 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:43:45.329003    8992 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:43:45.329073    8992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kubenet-225000/config.json ...
	I0520 03:43:45.329087    8992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/kubenet-225000/config.json: {Name:mk3a1bb45b3e448e9f1cd92a07b13be842b82149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:43:45.329386    8992 start.go:360] acquireMachinesLock for kubenet-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:45.329417    8992 start.go:364] duration metric: took 25.917µs to acquireMachinesLock for "kubenet-225000"
	I0520 03:43:45.329428    8992 start.go:93] Provisioning new machine with config: &{Name:kubenet-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:45.329455    8992 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:45.333875    8992 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:45.349382    8992 start.go:159] libmachine.API.Create for "kubenet-225000" (driver="qemu2")
	I0520 03:43:45.349406    8992 client.go:168] LocalClient.Create starting
	I0520 03:43:45.349459    8992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:45.349487    8992 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:45.349502    8992 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:45.349539    8992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:45.349561    8992 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:45.349568    8992 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:45.349928    8992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:45.483326    8992 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:45.534595    8992 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:45.534601    8992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:45.534783    8992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2
	I0520 03:43:45.547218    8992 main.go:141] libmachine: STDOUT: 
	I0520 03:43:45.547238    8992 main.go:141] libmachine: STDERR: 
	I0520 03:43:45.547291    8992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2 +20000M
	I0520 03:43:45.558297    8992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:45.558312    8992 main.go:141] libmachine: STDERR: 
	I0520 03:43:45.558335    8992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2
	I0520 03:43:45.558340    8992 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:45.558374    8992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:10:74:96:95:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2
	I0520 03:43:45.560182    8992 main.go:141] libmachine: STDOUT: 
	I0520 03:43:45.560200    8992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:45.560217    8992 client.go:171] duration metric: took 210.808958ms to LocalClient.Create
	I0520 03:43:47.562427    8992 start.go:128] duration metric: took 2.232983625s to createHost
	I0520 03:43:47.562513    8992 start.go:83] releasing machines lock for "kubenet-225000", held for 2.233121459s
	W0520 03:43:47.562567    8992 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:47.573864    8992 out.go:177] * Deleting "kubenet-225000" in qemu2 ...
	W0520 03:43:47.598227    8992 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:47.598267    8992 start.go:728] Will try again in 5 seconds ...
	I0520 03:43:52.600506    8992 start.go:360] acquireMachinesLock for kubenet-225000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:52.601117    8992 start.go:364] duration metric: took 491.875µs to acquireMachinesLock for "kubenet-225000"
	I0520 03:43:52.601273    8992 start.go:93] Provisioning new machine with config: &{Name:kubenet-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:52.601652    8992 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:52.610103    8992 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 03:43:52.654704    8992 start.go:159] libmachine.API.Create for "kubenet-225000" (driver="qemu2")
	I0520 03:43:52.654756    8992 client.go:168] LocalClient.Create starting
	I0520 03:43:52.654869    8992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:52.654937    8992 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:52.654955    8992 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:52.655038    8992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:52.655082    8992 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:52.655098    8992 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:52.655667    8992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:52.800403    8992 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:52.871111    8992 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:52.871123    8992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:52.871308    8992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2
	I0520 03:43:52.884009    8992 main.go:141] libmachine: STDOUT: 
	I0520 03:43:52.884042    8992 main.go:141] libmachine: STDERR: 
	I0520 03:43:52.884095    8992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2 +20000M
	I0520 03:43:52.895454    8992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:52.895474    8992 main.go:141] libmachine: STDERR: 
	I0520 03:43:52.895484    8992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2
	I0520 03:43:52.895488    8992 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:52.895523    8992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:0a:51:32:59:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/kubenet-225000/disk.qcow2
	I0520 03:43:52.897314    8992 main.go:141] libmachine: STDOUT: 
	I0520 03:43:52.897332    8992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:52.897344    8992 client.go:171] duration metric: took 242.585042ms to LocalClient.Create
	I0520 03:43:54.899404    8992 start.go:128] duration metric: took 2.297766792s to createHost
	I0520 03:43:54.899433    8992 start.go:83] releasing machines lock for "kubenet-225000", held for 2.298333666s
	W0520 03:43:54.899517    8992 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:54.907684    8992 out.go:177] 
	W0520 03:43:54.911708    8992 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:43:54.911714    8992 out.go:239] * 
	* 
	W0520 03:43:54.912186    8992 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:43:54.923739    8992 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.71s)
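
Every qemu2 failure in this report stops at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation aborts. "Connection refused" on a unix socket means the socket file exists but no daemon is listening behind it (a missing file would fail with "no such file or directory" instead), which points at a dead socket_vmnet service on the CI host rather than at the tests themselves. As a triage aid, here is a minimal Go sketch, not part of the suite, that reproduces the failing dial using the socket path taken verbatim from the log above:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Attempt the same unix-socket dial that socket_vmnet_client performs.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" => socket file present, but no socket_vmnet daemon listening.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If the dial fails, the likely fix is to (re)start the socket_vmnet daemon on the agent; that step is outside what this log can show, and every qemu2 test below fails for the same underlying reason.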

TestStartStop/group/old-k8s-version/serial/FirstStart (9.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-215000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-215000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.682264917s)

-- stdout --
	* [old-k8s-version-215000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-215000" primary control-plane node in "old-k8s-version-215000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-215000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:43:57.069503    9104 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:43:57.069646    9104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:57.069649    9104 out.go:304] Setting ErrFile to fd 2...
	I0520 03:43:57.069651    9104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:43:57.069775    9104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:43:57.070826    9104 out.go:298] Setting JSON to false
	I0520 03:43:57.087281    9104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6208,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:43:57.087347    9104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:43:57.092330    9104 out.go:177] * [old-k8s-version-215000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:43:57.099275    9104 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:43:57.103192    9104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:43:57.099344    9104 notify.go:220] Checking for updates...
	I0520 03:43:57.109154    9104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:43:57.112254    9104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:43:57.120235    9104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:43:57.123261    9104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:43:57.126670    9104 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:43:57.126744    9104 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:43:57.126796    9104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:43:57.131234    9104 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:43:57.138225    9104 start.go:297] selected driver: qemu2
	I0520 03:43:57.138232    9104 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:43:57.138239    9104 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:43:57.140641    9104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:43:57.144256    9104 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:43:57.147214    9104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:43:57.147230    9104 cni.go:84] Creating CNI manager for ""
	I0520 03:43:57.147237    9104 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 03:43:57.147274    9104 start.go:340] cluster config:
	{Name:old-k8s-version-215000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:43:57.152012    9104 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:43:57.160237    9104 out.go:177] * Starting "old-k8s-version-215000" primary control-plane node in "old-k8s-version-215000" cluster
	I0520 03:43:57.164202    9104 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:43:57.164221    9104 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:43:57.164233    9104 cache.go:56] Caching tarball of preloaded images
	I0520 03:43:57.164297    9104 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:43:57.164303    9104 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 03:43:57.164369    9104 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/old-k8s-version-215000/config.json ...
	I0520 03:43:57.164380    9104 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/old-k8s-version-215000/config.json: {Name:mk517d9ec5f609f3aaaa8aaefefd7c27d3d82809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:43:57.164635    9104 start.go:360] acquireMachinesLock for old-k8s-version-215000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:43:57.164667    9104 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "old-k8s-version-215000"
	I0520 03:43:57.164678    9104 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:43:57.164704    9104 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:43:57.173196    9104 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:43:57.188099    9104 start.go:159] libmachine.API.Create for "old-k8s-version-215000" (driver="qemu2")
	I0520 03:43:57.188126    9104 client.go:168] LocalClient.Create starting
	I0520 03:43:57.188200    9104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:43:57.188228    9104 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:57.188241    9104 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:57.188278    9104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:43:57.188300    9104 main.go:141] libmachine: Decoding PEM data...
	I0520 03:43:57.188307    9104 main.go:141] libmachine: Parsing certificate...
	I0520 03:43:57.188712    9104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:43:57.323667    9104 main.go:141] libmachine: Creating SSH key...
	I0520 03:43:57.373803    9104 main.go:141] libmachine: Creating Disk image...
	I0520 03:43:57.373808    9104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:43:57.373998    9104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:43:57.386524    9104 main.go:141] libmachine: STDOUT: 
	I0520 03:43:57.386544    9104 main.go:141] libmachine: STDERR: 
	I0520 03:43:57.386625    9104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2 +20000M
	I0520 03:43:57.397999    9104 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:43:57.398026    9104 main.go:141] libmachine: STDERR: 
	I0520 03:43:57.398045    9104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:43:57.398050    9104 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:43:57.398093    9104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:41:a6:88:55:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:43:57.400055    9104 main.go:141] libmachine: STDOUT: 
	I0520 03:43:57.400075    9104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:43:57.400109    9104 client.go:171] duration metric: took 211.98325ms to LocalClient.Create
	I0520 03:43:59.402204    9104 start.go:128] duration metric: took 2.23752175s to createHost
	I0520 03:43:59.402249    9104 start.go:83] releasing machines lock for "old-k8s-version-215000", held for 2.237610709s
	W0520 03:43:59.402310    9104 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:59.415620    9104 out.go:177] * Deleting "old-k8s-version-215000" in qemu2 ...
	W0520 03:43:59.432879    9104 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:43:59.432896    9104 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:04.434906    9104 start.go:360] acquireMachinesLock for old-k8s-version-215000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:04.435051    9104 start.go:364] duration metric: took 108.625µs to acquireMachinesLock for "old-k8s-version-215000"
	I0520 03:44:04.435086    9104 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:04.435135    9104 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:04.444346    9104 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:04.459745    9104 start.go:159] libmachine.API.Create for "old-k8s-version-215000" (driver="qemu2")
	I0520 03:44:04.459778    9104 client.go:168] LocalClient.Create starting
	I0520 03:44:04.459837    9104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:04.459869    9104 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:04.459879    9104 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:04.459910    9104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:04.459932    9104 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:04.459937    9104 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:04.460240    9104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:04.596098    9104 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:04.649611    9104 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:04.649616    9104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:04.649821    9104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:44:04.662326    9104 main.go:141] libmachine: STDOUT: 
	I0520 03:44:04.662346    9104 main.go:141] libmachine: STDERR: 
	I0520 03:44:04.662401    9104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2 +20000M
	I0520 03:44:04.673716    9104 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:04.673746    9104 main.go:141] libmachine: STDERR: 
	I0520 03:44:04.673761    9104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:44:04.673768    9104 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:04.673797    9104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b9:4a:d2:03:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:44:04.675856    9104 main.go:141] libmachine: STDOUT: 
	I0520 03:44:04.675873    9104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:04.675886    9104 client.go:171] duration metric: took 216.108208ms to LocalClient.Create
	I0520 03:44:06.678195    9104 start.go:128] duration metric: took 2.243060791s to createHost
	I0520 03:44:06.678313    9104 start.go:83] releasing machines lock for "old-k8s-version-215000", held for 2.243282584s
	W0520 03:44:06.678865    9104 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:06.693475    9104 out.go:177] 
	W0520 03:44:06.696618    9104 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:06.696650    9104 out.go:239] * 
	* 
	W0520 03:44:06.699045    9104 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:06.713444    9104 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-215000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (64.993958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.75s)
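
As the trace above shows, the start path deletes the half-created profile, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries host creation once, and only then exits with GUEST_PROVISION. A minimal sketch of that fixed-delay retry shape (illustrative only, not minikube's actual implementation; createHost here is a hypothetical stand-in for the driver call):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver's host-creation step,
// which in this run always fails the same way.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// startWithRetry tries createHost up to attempts times with a fixed delay between tries.
func startWithRetry(attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = createHost(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(delay)
		}
	}
	return err
}

func main() {
	if err := startWithRetry(2, 5*time.Second); err != nil {
		fmt.Printf("X Exiting: %v\n", err)
	}
}

With no socket_vmnet daemon listening, the retry is futile, which is why both attempts in the log fail identically a few seconds apart.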

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-215000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-215000 create -f testdata/busybox.yaml: exit status 1 (31.045791ms)

** stderr ** 
	error: context "old-k8s-version-215000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-215000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (29.152375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (28.214959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
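
This failure and the remaining serial steps for this group are cascading: FirstStart never created the VM, so the kubeconfig context "old-k8s-version-215000" was never written and every kubectl call exits with "context ... does not exist". A small Go sketch (a hypothetical helper, not code from the suite) that separates "cluster never created" from a genuine deploy problem before digging into busybox.yaml:

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether kubectl knows the named kubeconfig context.
// `kubectl config get-contexts <name>` exits non-zero when the context is absent.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	if !contextExists("old-k8s-version-215000") {
		fmt.Println("context missing: root cause is the earlier FirstStart failure, not the deploy")
		return
	}
	fmt.Println("context present: investigate the deploy itself")
}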

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-215000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-215000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-215000 describe deploy/metrics-server -n kube-system: exit status 1 (27.264792ms)

** stderr ** 
	error: context "old-k8s-version-215000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-215000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (28.703209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
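
The check at start_stop_delete_test.go:221 expects the deployment description to contain the overridden image reference " fake.domain/registry.k8s.io/echoserver:1.4"; with no cluster behind the context, the describe call fails and the deployment info is empty, so the assertion trivially fails. Roughly what that assertion amounts to, sketched in Go under that reading of the log (not the test's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Describe the addon deployment the same way the test drives kubectl.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-215000",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		fmt.Printf("describe failed (no cluster behind the context?): %v\n", err)
		return
	}
	const want = "fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		fmt.Printf("addon did not load correct image; expected output to contain %q\n", want)
		return
	}
	fmt.Println("metrics-server image override applied")
}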

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-215000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-215000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.192621708s)

-- stdout --
	* [old-k8s-version-215000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-215000" primary control-plane node in "old-k8s-version-215000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-215000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-215000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:44:10.613453    9160 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:10.613593    9160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:10.613596    9160 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:10.613599    9160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:10.613731    9160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:10.614758    9160 out.go:298] Setting JSON to false
	I0520 03:44:10.630753    9160 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6221,"bootTime":1716195629,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:10.630822    9160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:10.635127    9160 out.go:177] * [old-k8s-version-215000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:10.643073    9160 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:10.647096    9160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:10.643112    9160 notify.go:220] Checking for updates...
	I0520 03:44:10.654036    9160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:10.657098    9160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:10.660180    9160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:10.663032    9160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:10.666403    9160 config.go:182] Loaded profile config "old-k8s-version-215000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 03:44:10.670009    9160 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 03:44:10.673047    9160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:10.677096    9160 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:44:10.684094    9160 start.go:297] selected driver: qemu2
	I0520 03:44:10.684102    9160 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:10.684169    9160 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:10.686474    9160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:10.686502    9160 cni.go:84] Creating CNI manager for ""
	I0520 03:44:10.686511    9160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 03:44:10.686538    9160 start.go:340] cluster config:
	{Name:old-k8s-version-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:10.691203    9160 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:10.696029    9160 out.go:177] * Starting "old-k8s-version-215000" primary control-plane node in "old-k8s-version-215000" cluster
	I0520 03:44:10.700113    9160 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:44:10.700130    9160 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:44:10.700145    9160 cache.go:56] Caching tarball of preloaded images
	I0520 03:44:10.700201    9160 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:44:10.700206    9160 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 03:44:10.700266    9160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/old-k8s-version-215000/config.json ...
	I0520 03:44:10.700660    9160 start.go:360] acquireMachinesLock for old-k8s-version-215000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:10.700685    9160 start.go:364] duration metric: took 19.791µs to acquireMachinesLock for "old-k8s-version-215000"
	I0520 03:44:10.700694    9160 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:10.700700    9160 fix.go:54] fixHost starting: 
	I0520 03:44:10.700805    9160 fix.go:112] recreateIfNeeded on old-k8s-version-215000: state=Stopped err=<nil>
	W0520 03:44:10.700813    9160 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:10.705040    9160 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-215000" ...
	I0520 03:44:10.713038    9160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b9:4a:d2:03:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:44:10.715020    9160 main.go:141] libmachine: STDOUT: 
	I0520 03:44:10.715038    9160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:10.715063    9160 fix.go:56] duration metric: took 14.362625ms for fixHost
	I0520 03:44:10.715068    9160 start.go:83] releasing machines lock for "old-k8s-version-215000", held for 14.379458ms
	W0520 03:44:10.715073    9160 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:10.715108    9160 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:10.715112    9160 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:15.717346    9160 start.go:360] acquireMachinesLock for old-k8s-version-215000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:15.717962    9160 start.go:364] duration metric: took 468.917µs to acquireMachinesLock for "old-k8s-version-215000"
	I0520 03:44:15.718059    9160 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:15.718081    9160 fix.go:54] fixHost starting: 
	I0520 03:44:15.718877    9160 fix.go:112] recreateIfNeeded on old-k8s-version-215000: state=Stopped err=<nil>
	W0520 03:44:15.718904    9160 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:15.726401    9160 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-215000" ...
	I0520 03:44:15.730625    9160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b9:4a:d2:03:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/old-k8s-version-215000/disk.qcow2
	I0520 03:44:15.738663    9160 main.go:141] libmachine: STDOUT: 
	I0520 03:44:15.738727    9160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:15.738800    9160 fix.go:56] duration metric: took 20.723375ms for fixHost
	I0520 03:44:15.738816    9160 start.go:83] releasing machines lock for "old-k8s-version-215000", held for 20.831916ms
	W0520 03:44:15.739017    9160 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-215000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-215000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:15.751483    9160 out.go:177] 
	W0520 03:44:15.755511    9160 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:15.755565    9160 out.go:239] * 
	* 
	W0520 03:44:15.757118    9160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:15.766511    9160 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-215000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
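Note: every failure in this group reduces to the same host-side fault: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket at /var/run/socket_vmnet ("Connection refused"). A minimal host-side triage sketch, using only the paths already shown in the log above:

    # Does the socket exist on the build agent?
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet daemon running to accept connections on it?
    pgrep -fl socket_vmnet

If no daemon is running, restarting it (however it is supervised on this agent, e.g. via launchd) should clear the GUEST_PROVISION exits that follow; the minikube invocations themselves are well-formed.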
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (58.082709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-215000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (30.59475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-215000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-215000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-215000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.944916ms)
** stderr ** 
	error: context "old-k8s-version-215000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-215000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (28.904625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-215000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
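The `-want +got` diff above shows the full expected v1.20.0 image set on the `-want` side and nothing on the `+got` side: `image list` returned empty because the VM never started, not because individual images were missing. For reference, the kubeadm-managed portion of that expected set can be reproduced independently; a sketch, assuming a kubeadm binary of a matching v1.20.x version is available (storage-provisioner is a minikube addon image, so it will not appear in this output):

    kubeadm config images list --kubernetes-version v1.20.0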
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (28.181375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-215000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-215000 --alsologtostderr -v=1: exit status 83 (40.99825ms)
-- stdout --
	* The control-plane node old-k8s-version-215000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-215000"
-- /stdout --
** stderr ** 
	I0520 03:44:16.022178    9179 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:16.023106    9179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:16.023110    9179 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:16.023112    9179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:16.023275    9179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:16.023505    9179 out.go:298] Setting JSON to false
	I0520 03:44:16.023510    9179 mustload.go:65] Loading cluster: old-k8s-version-215000
	I0520 03:44:16.023710    9179 config.go:182] Loaded profile config "old-k8s-version-215000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 03:44:16.028474    9179 out.go:177] * The control-plane node old-k8s-version-215000 host is not running: state=Stopped
	I0520 03:44:16.031435    9179 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-215000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-215000 --alsologtostderr -v=1 failed: exit status 83
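Exit status 83 is an advisory exit here: mustload finds the host stopped, so pause prints the "To start a cluster, run: ..." guidance instead of attempting to pause anything. On a healthy agent, the sequence this test exercises would be, using the profile name from this run:

    out/minikube-darwin-arm64 start -p old-k8s-version-215000
    out/minikube-darwin-arm64 pause -p old-k8s-version-215000 --alsologtostderr -v=1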
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (28.209667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (28.106792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-215000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
TestStartStop/group/no-preload/serial/FirstStart (9.89s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.821738125s)
-- stdout --
	* [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-828000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0520 03:44:16.474501    9202 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:16.474657    9202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:16.474660    9202 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:16.474663    9202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:16.474784    9202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:16.475864    9202 out.go:298] Setting JSON to false
	I0520 03:44:16.492853    9202 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6227,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:16.492916    9202 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:16.496881    9202 out.go:177] * [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:16.503764    9202 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:16.507837    9202 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:16.503882    9202 notify.go:220] Checking for updates...
	I0520 03:44:16.510860    9202 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:16.513869    9202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:16.516837    9202 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:16.519823    9202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:16.523084    9202 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:16.523154    9202 config.go:182] Loaded profile config "stopped-upgrade-555000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0520 03:44:16.523197    9202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:16.527811    9202 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:44:16.534762    9202 start.go:297] selected driver: qemu2
	I0520 03:44:16.534768    9202 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:44:16.534784    9202 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:16.537011    9202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:44:16.539731    9202 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:44:16.542936    9202 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:16.542957    9202 cni.go:84] Creating CNI manager for ""
	I0520 03:44:16.542965    9202 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:16.542968    9202 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:44:16.543008    9202 start.go:340] cluster config:
	{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:16.547150    9202 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.554829    9202 out.go:177] * Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	I0520 03:44:16.558644    9202 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:16.558699    9202 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/no-preload-828000/config.json ...
	I0520 03:44:16.558712    9202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/no-preload-828000/config.json: {Name:mkc2415d86c12968de3225654a487a1f393e0edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:44:16.558737    9202 cache.go:107] acquiring lock: {Name:mk66345377435c370bdd94262cb2f18321c8806b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558734    9202 cache.go:107] acquiring lock: {Name:mk134c8ee9c54e0292cb1e410c67a6daa08bf67e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558758    9202 cache.go:107] acquiring lock: {Name:mk3516480846a3cde4b08e882ac8306ca657e639 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558836    9202 cache.go:107] acquiring lock: {Name:mkc7c93688d50c155464dfe8d1de6e750b397532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558876    9202 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 03:44:16.558785    9202 cache.go:107] acquiring lock: {Name:mke5637c9fd9561f4de7d8a6efb1357ab2f38ace Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558950    9202 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:16.558966    9202 cache.go:107] acquiring lock: {Name:mka7c4dea52352a1c9db081cc36aa609a6151a8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558964    9202 cache.go:107] acquiring lock: {Name:mkb95162f4d9695bc32e527233bcde8466840588 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.558993    9202 cache.go:107] acquiring lock: {Name:mk4bf97a9ef80243929e1132b65f3fdc920de30b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:16.559052    9202 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 03:44:16.559078    9202 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 03:44:16.559086    9202 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 03:44:16.559095    9202 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 360.208µs
	I0520 03:44:16.559103    9202 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 03:44:16.559131    9202 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 03:44:16.559133    9202 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 03:44:16.559164    9202 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 03:44:16.559164    9202 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 03:44:16.558986    9202 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "no-preload-828000"
	I0520 03:44:16.559228    9202 start.go:93] Provisioning new machine with config: &{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:16.559277    9202 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:16.567727    9202 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:16.570809    9202 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 03:44:16.571030    9202 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 03:44:16.571587    9202 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 03:44:16.573369    9202 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 03:44:16.573753    9202 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 03:44:16.573793    9202 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 03:44:16.573874    9202 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 03:44:16.583297    9202 start.go:159] libmachine.API.Create for "no-preload-828000" (driver="qemu2")
	I0520 03:44:16.583321    9202 client.go:168] LocalClient.Create starting
	I0520 03:44:16.583417    9202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:16.583451    9202 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:16.583461    9202 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:16.583521    9202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:16.583545    9202 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:16.583551    9202 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:16.583965    9202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:16.724209    9202 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:16.788761    9202 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:16.788786    9202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:16.789000    9202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:16.802464    9202 main.go:141] libmachine: STDOUT: 
	I0520 03:44:16.802513    9202 main.go:141] libmachine: STDERR: 
	I0520 03:44:16.802562    9202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2 +20000M
	I0520 03:44:16.815175    9202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:16.815194    9202 main.go:141] libmachine: STDERR: 
	I0520 03:44:16.815206    9202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:16.815210    9202 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:16.815245    9202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:a5:01:49:75:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:16.817335    9202 main.go:141] libmachine: STDOUT: 
	I0520 03:44:16.817361    9202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:16.817380    9202 client.go:171] duration metric: took 234.059208ms to LocalClient.Create
	I0520 03:44:16.919278    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0520 03:44:16.938737    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 03:44:16.964866    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 03:44:16.969087    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 03:44:16.997917    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 03:44:17.035857    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0520 03:44:17.040519    9202 cache.go:162] opening:  /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 03:44:17.065027    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 03:44:17.065040    9202 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 506.144541ms
	I0520 03:44:17.065046    9202 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 03:44:18.817570    9202 start.go:128] duration metric: took 2.258310709s to createHost
	I0520 03:44:18.817602    9202 start.go:83] releasing machines lock for "no-preload-828000", held for 2.258419416s
	W0520 03:44:18.817629    9202 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:18.828813    9202 out.go:177] * Deleting "no-preload-828000" in qemu2 ...
	W0520 03:44:18.846531    9202 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:18.846551    9202 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:19.850537    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 03:44:19.850550    9202 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 3.291608209s
	I0520 03:44:19.850556    9202 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 03:44:20.711937    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 03:44:20.711994    9202 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 4.153297375s
	I0520 03:44:20.712014    9202 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 03:44:21.061843    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 03:44:21.061879    9202 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.503126917s
	I0520 03:44:21.061902    9202 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 03:44:21.070287    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 03:44:21.070325    9202 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 4.511447083s
	I0520 03:44:21.070343    9202 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 03:44:22.173518    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 03:44:22.173542    9202 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 5.614900166s
	I0520 03:44:22.173553    9202 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 03:44:23.846764    9202 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:23.847202    9202 start.go:364] duration metric: took 355.791µs to acquireMachinesLock for "no-preload-828000"
	I0520 03:44:23.847324    9202 start.go:93] Provisioning new machine with config: &{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:23.847565    9202 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:23.853326    9202 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:23.902800    9202 start.go:159] libmachine.API.Create for "no-preload-828000" (driver="qemu2")
	I0520 03:44:23.902849    9202 client.go:168] LocalClient.Create starting
	I0520 03:44:23.902999    9202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:23.903084    9202 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:23.903105    9202 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:23.903178    9202 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:23.903231    9202 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:23.903252    9202 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:23.903777    9202 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:24.049345    9202 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:24.196916    9202 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:24.196924    9202 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:24.197127    9202 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:24.209899    9202 main.go:141] libmachine: STDOUT: 
	I0520 03:44:24.209929    9202 main.go:141] libmachine: STDERR: 
	I0520 03:44:24.209990    9202 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2 +20000M
	I0520 03:44:24.221803    9202 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:24.221821    9202 main.go:141] libmachine: STDERR: 
	I0520 03:44:24.221838    9202 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:24.221841    9202 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:24.221894    9202 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a3:10:72:0c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:24.223804    9202 main.go:141] libmachine: STDOUT: 
	I0520 03:44:24.223821    9202 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:24.223836    9202 client.go:171] duration metric: took 320.987583ms to LocalClient.Create
	I0520 03:44:25.488777    9202 cache.go:157] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 03:44:25.488837    9202 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.930191625s
	I0520 03:44:25.488863    9202 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 03:44:25.488894    9202 cache.go:87] Successfully saved all images to host disk.
	I0520 03:44:26.224753    9202 start.go:128] duration metric: took 2.377153834s to createHost
	I0520 03:44:26.224847    9202 start.go:83] releasing machines lock for "no-preload-828000", held for 2.3776585s
	W0520 03:44:26.225209    9202 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:26.235785    9202 out.go:177] 
	W0520 03:44:26.241946    9202 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:26.241982    9202 out.go:239] * 
	* 
	W0520 03:44:26.244716    9202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:26.254849    9202 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
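Both the earlier restart path (fix.go) and this fresh-create path (client.go) die on the same socket_vmnet connect, which points at the shared host daemon rather than at either code path. One way to confirm that would be to retry the same start on the qemu2 driver's builtin user-mode network; a sketch, assuming `--network=user` (the qemu2 default when socket_vmnet is not selected, with known host-connectivity limitations):

    out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --driver=qemu2 --network=user --kubernetes-version=v1.30.1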
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (65.449583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
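Note that with --preload=false the start path caches each image individually (the cache.go lines above), and all eight images (seven from registry.k8s.io plus storage-provisioner) were saved successfully even though the VM never came up, so the caching machinery itself is not implicated. The saved tarballs can be inspected directly under the cache directory shown in the log:

    ls /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/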
TestStartStop/group/no-preload/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-828000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-828000 create -f testdata/busybox.yaml: exit status 1 (30.337833ms)
** stderr ** 
	error: context "no-preload-828000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-828000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.286208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.173916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
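The `error: context "no-preload-828000" does not exist` failure here, and in the tests that follow, is downstream of FirstStart never completing: minikube only writes the kubectl context once the VM provisions, so every kubectl step in this group hits a missing context. A quick confirmation sketch using the kubeconfig path from the run:

	# The context should be absent after the failed first start
	KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig kubectl config get-contexts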

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-828000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-828000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-828000 describe deploy/metrics-server -n kube-system: exit status 1 (27.387667ms)

** stderr ** 
	error: context "no-preload-828000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-828000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.830708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.181361667s)

-- stdout --
	* [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	* Restarting existing qemu2 VM for "no-preload-828000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-828000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:44:30.004638    9285 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:30.004784    9285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:30.004788    9285 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:30.004790    9285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:30.004923    9285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:30.006018    9285 out.go:298] Setting JSON to false
	I0520 03:44:30.022449    9285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6241,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:30.022517    9285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:30.027727    9285 out.go:177] * [no-preload-828000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:30.034694    9285 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:30.037715    9285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:30.034757    9285 notify.go:220] Checking for updates...
	I0520 03:44:30.044598    9285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:30.047639    9285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:30.050649    9285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:30.053656    9285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:30.056875    9285 config.go:182] Loaded profile config "no-preload-828000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:30.057139    9285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:30.060644    9285 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:44:30.067712    9285 start.go:297] selected driver: qemu2
	I0520 03:44:30.067722    9285 start.go:901] validating driver "qemu2" against &{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:30.067852    9285 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:30.070070    9285 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:30.070093    9285 cni.go:84] Creating CNI manager for ""
	I0520 03:44:30.070100    9285 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:30.070122    9285 start.go:340] cluster config:
	{Name:no-preload-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:30.074247    9285 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.081629    9285 out.go:177] * Starting "no-preload-828000" primary control-plane node in "no-preload-828000" cluster
	I0520 03:44:30.085570    9285 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:30.085661    9285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/no-preload-828000/config.json ...
	I0520 03:44:30.085679    9285 cache.go:107] acquiring lock: {Name:mk66345377435c370bdd94262cb2f18321c8806b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085679    9285 cache.go:107] acquiring lock: {Name:mk134c8ee9c54e0292cb1e410c67a6daa08bf67e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085699    9285 cache.go:107] acquiring lock: {Name:mka7c4dea52352a1c9db081cc36aa609a6151a8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085741    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 03:44:30.085747    9285 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 68.875µs
	I0520 03:44:30.085753    9285 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 03:44:30.085756    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 03:44:30.085756    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 03:44:30.085762    9285 cache.go:107] acquiring lock: {Name:mk4bf97a9ef80243929e1132b65f3fdc920de30b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085766    9285 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 81.542µs
	I0520 03:44:30.085770    9285 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 03:44:30.085768    9285 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 96.25µs
	I0520 03:44:30.085776    9285 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 03:44:30.085777    9285 cache.go:107] acquiring lock: {Name:mk3516480846a3cde4b08e882ac8306ca657e639 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085789    9285 cache.go:107] acquiring lock: {Name:mke5637c9fd9561f4de7d8a6efb1357ab2f38ace Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085796    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 03:44:30.085830    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 03:44:30.085832    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 03:44:30.085833    9285 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 56.791µs
	I0520 03:44:30.085837    9285 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 03:44:30.085836    9285 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 47.542µs
	I0520 03:44:30.085842    9285 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 03:44:30.085838    9285 cache.go:107] acquiring lock: {Name:mkb95162f4d9695bc32e527233bcde8466840588 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085873    9285 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 51.416µs
	I0520 03:44:30.085887    9285 cache.go:107] acquiring lock: {Name:mkc7c93688d50c155464dfe8d1de6e750b397532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:30.085888    9285 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 03:44:30.085908    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0520 03:44:30.085912    9285 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 90.75µs
	I0520 03:44:30.085919    9285 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0520 03:44:30.085929    9285 cache.go:115] /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 03:44:30.085933    9285 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 78.292µs
	I0520 03:44:30.085940    9285 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 03:44:30.085945    9285 cache.go:87] Successfully saved all images to host disk.
	I0520 03:44:30.086099    9285 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:30.086127    9285 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "no-preload-828000"
	I0520 03:44:30.086136    9285 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:30.086142    9285 fix.go:54] fixHost starting: 
	I0520 03:44:30.086243    9285 fix.go:112] recreateIfNeeded on no-preload-828000: state=Stopped err=<nil>
	W0520 03:44:30.086251    9285 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:30.094621    9285 out.go:177] * Restarting existing qemu2 VM for "no-preload-828000" ...
	I0520 03:44:30.098671    9285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a3:10:72:0c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:30.100541    9285 main.go:141] libmachine: STDOUT: 
	I0520 03:44:30.100560    9285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:30.100585    9285 fix.go:56] duration metric: took 14.443334ms for fixHost
	I0520 03:44:30.100588    9285 start.go:83] releasing machines lock for "no-preload-828000", held for 14.45725ms
	W0520 03:44:30.100593    9285 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:30.100620    9285 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:30.100624    9285 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:35.102805    9285 start.go:360] acquireMachinesLock for no-preload-828000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:35.103384    9285 start.go:364] duration metric: took 477.959µs to acquireMachinesLock for "no-preload-828000"
	I0520 03:44:35.103560    9285 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:35.103582    9285 fix.go:54] fixHost starting: 
	I0520 03:44:35.104339    9285 fix.go:112] recreateIfNeeded on no-preload-828000: state=Stopped err=<nil>
	W0520 03:44:35.104368    9285 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:35.112783    9285 out.go:177] * Restarting existing qemu2 VM for "no-preload-828000" ...
	I0520 03:44:35.117205    9285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a3:10:72:0c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/no-preload-828000/disk.qcow2
	I0520 03:44:35.127306    9285 main.go:141] libmachine: STDOUT: 
	I0520 03:44:35.127417    9285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:35.127496    9285 fix.go:56] duration metric: took 23.918084ms for fixHost
	I0520 03:44:35.127510    9285 start.go:83] releasing machines lock for "no-preload-828000", held for 24.097417ms
	W0520 03:44:35.127655    9285 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-828000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:35.135721    9285 out.go:177] 
	W0520 03:44:35.138795    9285 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:35.138813    9285 out.go:239] * 
	* 
	W0520 03:44:35.140272    9285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:35.147016    9285 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (59.383625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
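SecondStart reuses the stopped profile and hits the same socket_vmnet refusal. A recovery sketch built from the advice minikube itself prints above, with the flags copied from the failing invocation; it can only succeed once socket_vmnet is reachable again:

	out/minikube-darwin-arm64 delete -p no-preload-828000
	out/minikube-darwin-arm64 start -p no-preload-828000 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false --driver=qemu2 \
	  --kubernetes-version=v1.30.1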

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-828000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (30.564959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-828000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-828000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-828000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.674292ms)

** stderr ** 
	error: context "no-preload-828000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-828000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.683916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-828000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.98375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
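The want/got diff above lists every expected v1.30.1 image as missing because the host never booted, leaving `image list` with no runtime to query. A manual re-check sketch for a healthy cluster; the `repoTags` field name is an assumption about the JSON output, not confirmed by this log:

	# Should print the eight expected images once the VM is actually running
	out/minikube-darwin-arm64 -p no-preload-828000 image list --format=json | jq -r '.[].repoTags[]?'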

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-828000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-828000 --alsologtostderr -v=1: exit status 83 (40.681ms)

-- stdout --
	* The control-plane node no-preload-828000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-828000"

-- /stdout --
** stderr ** 
	I0520 03:44:35.403221    9305 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:35.403385    9305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:35.403389    9305 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:35.403391    9305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:35.403513    9305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:35.403728    9305 out.go:298] Setting JSON to false
	I0520 03:44:35.403734    9305 mustload.go:65] Loading cluster: no-preload-828000
	I0520 03:44:35.403922    9305 config.go:182] Loaded profile config "no-preload-828000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:35.408634    9305 out.go:177] * The control-plane node no-preload-828000 host is not running: state=Stopped
	I0520 03:44:35.412811    9305 out.go:177]   To start a cluster, run: "minikube start -p no-preload-828000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-828000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.8735ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (28.308542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-828000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-588000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-588000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.908356s)

-- stdout --
	* [embed-certs-588000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-588000" primary control-plane node in "embed-certs-588000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-588000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:44:35.672812    9323 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:35.672960    9323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:35.672963    9323 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:35.672966    9323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:35.673098    9323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:35.674349    9323 out.go:298] Setting JSON to false
	I0520 03:44:35.692020    9323 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6246,"bootTime":1716195629,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:35.692084    9323 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:35.696465    9323 out.go:177] * [embed-certs-588000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:35.703449    9323 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:35.706484    9323 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:35.703446    9323 notify.go:220] Checking for updates...
	I0520 03:44:35.710589    9323 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:35.713497    9323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:35.716521    9323 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:35.719477    9323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:35.722842    9323 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:35.722886    9323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:35.725494    9323 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:44:35.732431    9323 start.go:297] selected driver: qemu2
	I0520 03:44:35.732437    9323 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:44:35.732442    9323 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:35.734727    9323 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:44:35.738511    9323 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:44:35.741469    9323 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:35.741486    9323 cni.go:84] Creating CNI manager for ""
	I0520 03:44:35.741493    9323 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:35.741503    9323 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:44:35.741540    9323 start.go:340] cluster config:
	{Name:embed-certs-588000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:35.746221    9323 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:35.753472    9323 out.go:177] * Starting "embed-certs-588000" primary control-plane node in "embed-certs-588000" cluster
	I0520 03:44:35.757430    9323 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:35.757453    9323 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:44:35.757468    9323 cache.go:56] Caching tarball of preloaded images
	I0520 03:44:35.757533    9323 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:44:35.757539    9323 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:44:35.757591    9323 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/embed-certs-588000/config.json ...
	I0520 03:44:35.757603    9323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/embed-certs-588000/config.json: {Name:mk34346efa9c41759a4534cf9f17d5789ccc38e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:44:35.757853    9323 start.go:360] acquireMachinesLock for embed-certs-588000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:35.757884    9323 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "embed-certs-588000"
	I0520 03:44:35.757895    9323 start.go:93] Provisioning new machine with config: &{Name:embed-certs-588000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:35.757920    9323 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:35.765478    9323 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:35.781284    9323 start.go:159] libmachine.API.Create for "embed-certs-588000" (driver="qemu2")
	I0520 03:44:35.781315    9323 client.go:168] LocalClient.Create starting
	I0520 03:44:35.781398    9323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:35.781435    9323 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:35.781447    9323 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:35.781489    9323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:35.781511    9323 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:35.781520    9323 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:35.781897    9323 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:35.957937    9323 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:36.059815    9323 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:36.059826    9323 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:36.059999    9323 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:36.073479    9323 main.go:141] libmachine: STDOUT: 
	I0520 03:44:36.073506    9323 main.go:141] libmachine: STDERR: 
	I0520 03:44:36.073585    9323 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2 +20000M
	I0520 03:44:36.085615    9323 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:36.085638    9323 main.go:141] libmachine: STDERR: 
	I0520 03:44:36.085656    9323 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:36.085662    9323 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:36.085704    9323 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:9e:a1:62:71:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:36.087676    9323 main.go:141] libmachine: STDOUT: 
	I0520 03:44:36.087695    9323 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:36.087714    9323 client.go:171] duration metric: took 306.399292ms to LocalClient.Create
	I0520 03:44:38.089884    9323 start.go:128] duration metric: took 2.331978541s to createHost
	I0520 03:44:38.090063    9323 start.go:83] releasing machines lock for "embed-certs-588000", held for 2.332106833s
	W0520 03:44:38.090139    9323 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:38.107009    9323 out.go:177] * Deleting "embed-certs-588000" in qemu2 ...
	W0520 03:44:38.126724    9323 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:38.126748    9323 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:43.128849    9323 start.go:360] acquireMachinesLock for embed-certs-588000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:43.129366    9323 start.go:364] duration metric: took 427.542µs to acquireMachinesLock for "embed-certs-588000"
	I0520 03:44:43.129516    9323 start.go:93] Provisioning new machine with config: &{Name:embed-certs-588000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:43.129840    9323 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:43.138168    9323 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:43.190113    9323 start.go:159] libmachine.API.Create for "embed-certs-588000" (driver="qemu2")
	I0520 03:44:43.190176    9323 client.go:168] LocalClient.Create starting
	I0520 03:44:43.190295    9323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:43.190357    9323 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:43.190373    9323 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:43.190440    9323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:43.190483    9323 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:43.190495    9323 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:43.191019    9323 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:43.339821    9323 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:43.484450    9323 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:43.484456    9323 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:43.484648    9323 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:43.497710    9323 main.go:141] libmachine: STDOUT: 
	I0520 03:44:43.497735    9323 main.go:141] libmachine: STDERR: 
	I0520 03:44:43.497796    9323 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2 +20000M
	I0520 03:44:43.508907    9323 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:43.508930    9323 main.go:141] libmachine: STDERR: 
	I0520 03:44:43.508940    9323 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:43.508945    9323 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:43.508975    9323 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:b5:a8:48:77:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:43.510692    9323 main.go:141] libmachine: STDOUT: 
	I0520 03:44:43.510712    9323 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:43.510724    9323 client.go:171] duration metric: took 320.546833ms to LocalClient.Create
	I0520 03:44:45.512943    9323 start.go:128] duration metric: took 2.383084959s to createHost
	I0520 03:44:45.512988    9323 start.go:83] releasing machines lock for "embed-certs-588000", held for 2.383636917s
	W0520 03:44:45.513320    9323 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-588000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-588000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:45.527897    9323 out.go:177] 
	W0520 03:44:45.533945    9323 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:45.533974    9323 out.go:239] * 
	* 
	W0520 03:44:45.536614    9323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:45.543792    9323 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-588000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (50.894042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.96s)
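Every start attempt in this group dies at the same upstream point: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the Unix socket at /var/run/socket_vmnet, so the VM never boots and the profile is left "Stopped". A minimal triage sketch for the build agent follows; the --vmnet-gateway address is an assumed example, not a value taken from this report:

	# Is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it at the install path shown in the log above
	# (the gateway address is a placeholder; use the agent's vmnet subnet)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet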

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-351000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-351000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (11.765315709s)

-- stdout --
	* [default-k8s-diff-port-351000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-351000" primary control-plane node in "default-k8s-diff-port-351000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-351000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:44:36.174257    9354 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:36.174394    9354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:36.174397    9354 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:36.174399    9354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:36.174522    9354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:36.175619    9354 out.go:298] Setting JSON to false
	I0520 03:44:36.191628    9354 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6247,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:36.191691    9354 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:36.196626    9354 out.go:177] * [default-k8s-diff-port-351000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:36.203554    9354 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:36.203610    9354 notify.go:220] Checking for updates...
	I0520 03:44:36.210463    9354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:36.213519    9354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:36.216487    9354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:36.219530    9354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:36.222494    9354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:36.225854    9354 config.go:182] Loaded profile config "embed-certs-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:36.225916    9354 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:36.225975    9354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:36.230432    9354 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:44:36.237516    9354 start.go:297] selected driver: qemu2
	I0520 03:44:36.237524    9354 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:44:36.237532    9354 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:36.239763    9354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:44:36.243439    9354 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:44:36.246534    9354 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:36.246549    9354 cni.go:84] Creating CNI manager for ""
	I0520 03:44:36.246556    9354 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:36.246560    9354 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:44:36.246602    9354 start.go:340] cluster config:
	{Name:default-k8s-diff-port-351000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-351000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:36.251089    9354 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:36.256486    9354 out.go:177] * Starting "default-k8s-diff-port-351000" primary control-plane node in "default-k8s-diff-port-351000" cluster
	I0520 03:44:36.260485    9354 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:36.260499    9354 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:44:36.260507    9354 cache.go:56] Caching tarball of preloaded images
	I0520 03:44:36.260558    9354 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:44:36.260564    9354 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:44:36.260617    9354 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/default-k8s-diff-port-351000/config.json ...
	I0520 03:44:36.260628    9354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/default-k8s-diff-port-351000/config.json: {Name:mk029fe7a42a3e2947a5585e3e5b58baa1c54125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:44:36.260975    9354 start.go:360] acquireMachinesLock for default-k8s-diff-port-351000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:38.090206    9354 start.go:364] duration metric: took 1.829222125s to acquireMachinesLock for "default-k8s-diff-port-351000"
	I0520 03:44:38.090376    9354 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-351000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-351000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:38.090617    9354 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:38.100061    9354 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:38.150260    9354 start.go:159] libmachine.API.Create for "default-k8s-diff-port-351000" (driver="qemu2")
	I0520 03:44:38.150313    9354 client.go:168] LocalClient.Create starting
	I0520 03:44:38.150455    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:38.150567    9354 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:38.150592    9354 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:38.150661    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:38.150706    9354 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:38.150718    9354 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:38.151429    9354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:38.312695    9354 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:38.346329    9354 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:38.346334    9354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:38.346538    9354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:38.359291    9354 main.go:141] libmachine: STDOUT: 
	I0520 03:44:38.359313    9354 main.go:141] libmachine: STDERR: 
	I0520 03:44:38.359380    9354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2 +20000M
	I0520 03:44:38.370399    9354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:38.370414    9354 main.go:141] libmachine: STDERR: 
	I0520 03:44:38.370437    9354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:38.370441    9354 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:38.370472    9354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:3f:51:75:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:38.372170    9354 main.go:141] libmachine: STDOUT: 
	I0520 03:44:38.372187    9354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:38.372207    9354 client.go:171] duration metric: took 221.891041ms to LocalClient.Create
	I0520 03:44:40.372856    9354 start.go:128] duration metric: took 2.282251167s to createHost
	I0520 03:44:40.372901    9354 start.go:83] releasing machines lock for "default-k8s-diff-port-351000", held for 2.28269475s
	W0520 03:44:40.372969    9354 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:40.383158    9354 out.go:177] * Deleting "default-k8s-diff-port-351000" in qemu2 ...
	W0520 03:44:40.412406    9354 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:40.412439    9354 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:45.414615    9354 start.go:360] acquireMachinesLock for default-k8s-diff-port-351000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:45.513101    9354 start.go:364] duration metric: took 98.272542ms to acquireMachinesLock for "default-k8s-diff-port-351000"
	I0520 03:44:45.513258    9354 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-351000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-351000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:45.513510    9354 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:45.524865    9354 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:45.575061    9354 start.go:159] libmachine.API.Create for "default-k8s-diff-port-351000" (driver="qemu2")
	I0520 03:44:45.575112    9354 client.go:168] LocalClient.Create starting
	I0520 03:44:45.575203    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:45.575258    9354 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:45.575273    9354 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:45.575330    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:45.575359    9354 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:45.575371    9354 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:45.575847    9354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:45.726690    9354 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:45.840452    9354 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:45.840464    9354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:45.840660    9354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:45.854028    9354 main.go:141] libmachine: STDOUT: 
	I0520 03:44:45.854049    9354 main.go:141] libmachine: STDERR: 
	I0520 03:44:45.854111    9354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2 +20000M
	I0520 03:44:45.869753    9354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:45.869778    9354 main.go:141] libmachine: STDERR: 
	I0520 03:44:45.869789    9354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:45.869794    9354 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:45.869839    9354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c4:54:76:7f:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:45.871624    9354 main.go:141] libmachine: STDOUT: 
	I0520 03:44:45.871642    9354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:45.871655    9354 client.go:171] duration metric: took 296.542083ms to LocalClient.Create
	I0520 03:44:47.873911    9354 start.go:128] duration metric: took 2.360393792s to createHost
	I0520 03:44:47.874041    9354 start.go:83] releasing machines lock for "default-k8s-diff-port-351000", held for 2.360873583s
	W0520 03:44:47.874420    9354 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-351000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-351000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:47.880181    9354 out.go:177] 
	W0520 03:44:47.887127    9354 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:47.887170    9354 out.go:239] * 
	* 
	W0520 03:44:47.889753    9354 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:47.899061    9354 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-351000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (64.771917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.83s)
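Note that disk provisioning succeeds before the network step: the log shows libmachine building the qcow2 image in two qemu-img invocations. Replayed standalone (run inside a hypothetical machine directory, with the file names used in the log):

	# Convert the raw seed disk written by libmachine into qcow2 format
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# Grow the virtual disk by 20000 MB; qcow2 allocates lazily, so this is cheap
	qemu-img resize disk.qcow2 +20000M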

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-588000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-588000 create -f testdata/busybox.yaml: exit status 1 (31.457292ms)

** stderr ** 
	error: context "embed-certs-588000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-588000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (32.559209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (32.499542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
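The DeployApp failure is purely downstream of FirstStart: because no VM was ever created, minikube never wrote a kubeconfig entry, so every kubectl --context embed-certs-588000 invocation fails with "context does not exist" before any API call is made. Standard kubectl commands to confirm the missing context (not part of the test run):

	kubectl config get-contexts      # embed-certs-588000 will not be listed
	kubectl config current-context   # reports the active context, if any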

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-588000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-588000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-588000 describe deploy/metrics-server -n kube-system: exit status 1 (27.997667ms)

** stderr ** 
	error: context "embed-certs-588000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-588000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (28.922542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
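The check at start_stop_delete_test.go:221 greps the deployment description for the overridden registry, i.e. the image string " fake.domain/registry.k8s.io/echoserver:1.4". On a healthy cluster the manual equivalent of the enable-and-verify step would be (a sketch; it assumes the profile and its kubectl context actually exist):

	out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-588000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context embed-certs-588000 describe deploy/metrics-server -n kube-system | grep fake.domain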

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-351000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-351000 create -f testdata/busybox.yaml: exit status 1 (29.432792ms)

** stderr ** 
	error: context "default-k8s-diff-port-351000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-351000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (28.18075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (27.828ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
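helpers_test.go flags exit status 7 from "minikube status" as "may be ok" because status encodes component state in its exit code rather than signalling a harness error; here it simply reflects that the profile exists but nothing is running (host "Stopped"). The post-mortem check, runnable by hand with the flags used in this report:

	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
	echo $?    # 7 throughout this report: host stopped, not a status-command failure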

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-351000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-351000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-351000 describe deploy/metrics-server -n kube-system: exit status 1 (26.777166ms)

** stderr ** 
	error: context "default-k8s-diff-port-351000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-351000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (28.322084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-588000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-588000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.1780675s)

-- stdout --
	* [embed-certs-588000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-588000" primary control-plane node in "embed-certs-588000" cluster
	* Restarting existing qemu2 VM for "embed-certs-588000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-588000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:44:49.686926    9427 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:49.687071    9427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:49.687074    9427 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:49.687077    9427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:49.687212    9427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:49.688198    9427 out.go:298] Setting JSON to false
	I0520 03:44:49.704131    9427 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6260,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:49.704214    9427 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:49.709549    9427 out.go:177] * [embed-certs-588000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:49.717558    9427 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:49.721559    9427 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:49.717587    9427 notify.go:220] Checking for updates...
	I0520 03:44:49.726859    9427 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:49.729598    9427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:49.732582    9427 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:49.735540    9427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:49.738843    9427 config.go:182] Loaded profile config "embed-certs-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:49.739130    9427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:49.743518    9427 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:44:49.750517    9427 start.go:297] selected driver: qemu2
	I0520 03:44:49.750524    9427 start.go:901] validating driver "qemu2" against &{Name:embed-certs-588000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:49.750586    9427 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:49.752973    9427 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:49.752995    9427 cni.go:84] Creating CNI manager for ""
	I0520 03:44:49.753002    9427 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:49.753022    9427 start.go:340] cluster config:
	{Name:embed-certs-588000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:49.757304    9427 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:49.763474    9427 out.go:177] * Starting "embed-certs-588000" primary control-plane node in "embed-certs-588000" cluster
	I0520 03:44:49.767516    9427 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:49.767532    9427 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:44:49.767540    9427 cache.go:56] Caching tarball of preloaded images
	I0520 03:44:49.767596    9427 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:44:49.767601    9427 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:44:49.767654    9427 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/embed-certs-588000/config.json ...
	I0520 03:44:49.768084    9427 start.go:360] acquireMachinesLock for embed-certs-588000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:49.768114    9427 start.go:364] duration metric: took 23.458µs to acquireMachinesLock for "embed-certs-588000"
	I0520 03:44:49.768124    9427 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:49.768130    9427 fix.go:54] fixHost starting: 
	I0520 03:44:49.768244    9427 fix.go:112] recreateIfNeeded on embed-certs-588000: state=Stopped err=<nil>
	W0520 03:44:49.768257    9427 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:49.775545    9427 out.go:177] * Restarting existing qemu2 VM for "embed-certs-588000" ...
	I0520 03:44:49.779626    9427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:b5:a8:48:77:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:49.781647    9427 main.go:141] libmachine: STDOUT: 
	I0520 03:44:49.781671    9427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:49.781699    9427 fix.go:56] duration metric: took 13.569708ms for fixHost
	I0520 03:44:49.781703    9427 start.go:83] releasing machines lock for "embed-certs-588000", held for 13.585292ms
	W0520 03:44:49.781710    9427 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:49.781744    9427 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:49.781748    9427 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:54.783405    9427 start.go:360] acquireMachinesLock for embed-certs-588000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:54.783884    9427 start.go:364] duration metric: took 330.291µs to acquireMachinesLock for "embed-certs-588000"
	I0520 03:44:54.784076    9427 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:54.784101    9427 fix.go:54] fixHost starting: 
	I0520 03:44:54.784956    9427 fix.go:112] recreateIfNeeded on embed-certs-588000: state=Stopped err=<nil>
	W0520 03:44:54.784988    9427 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:54.789413    9427 out.go:177] * Restarting existing qemu2 VM for "embed-certs-588000" ...
	I0520 03:44:54.797566    9427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:b5:a8:48:77:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/embed-certs-588000/disk.qcow2
	I0520 03:44:54.806888    9427 main.go:141] libmachine: STDOUT: 
	I0520 03:44:54.806952    9427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:54.807016    9427 fix.go:56] duration metric: took 22.919417ms for fixHost
	I0520 03:44:54.807031    9427 start.go:83] releasing machines lock for "embed-certs-588000", held for 23.086459ms
	W0520 03:44:54.807189    9427 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-588000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-588000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:54.812326    9427 out.go:177] 
	W0520 03:44:54.816421    9427 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:54.816444    9427 out.go:239] * 
	* 
	W0520 03:44:54.819192    9427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:54.825356    9427 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-588000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (64.284792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
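
All of the failures in this group trace to one root cause visible in the log above: minikube launches QEMU through socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet, so every VM create or restart dies with "Connection refused". Below is a minimal triage sketch for the CI host, assuming the /opt/socket_vmnet install layout shown in the logs; the gateway address is an illustrative assumption taken from the socket_vmnet README, not from this report.

	# Is the socket present, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is not running, start it as root (per the socket_vmnet README);
	# socket_vmnet_client can then hand QEMU a connected fd and the start should proceed.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet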

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-351000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-351000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (6.272369042s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-351000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-351000" primary control-plane node in "default-k8s-diff-port-351000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-351000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-351000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:44:51.772661    9448 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:51.772789    9448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:51.772793    9448 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:51.772796    9448 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:51.772931    9448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:51.773968    9448 out.go:298] Setting JSON to false
	I0520 03:44:51.790297    9448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6262,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:51.790370    9448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:51.794422    9448 out.go:177] * [default-k8s-diff-port-351000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:51.801435    9448 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:51.801480    9448 notify.go:220] Checking for updates...
	I0520 03:44:51.805436    9448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:51.808378    9448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:51.811395    9448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:51.814329    9448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:51.817424    9448 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:51.820762    9448 config.go:182] Loaded profile config "default-k8s-diff-port-351000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:51.821033    9448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:51.825351    9448 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:44:51.832415    9448 start.go:297] selected driver: qemu2
	I0520 03:44:51.832422    9448 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-351000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-351000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:51.832483    9448 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:51.834745    9448 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 03:44:51.834766    9448 cni.go:84] Creating CNI manager for ""
	I0520 03:44:51.834772    9448 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:51.834793    9448 start.go:340] cluster config:
	{Name:default-k8s-diff-port-351000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-351000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:51.838755    9448 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:51.846397    9448 out.go:177] * Starting "default-k8s-diff-port-351000" primary control-plane node in "default-k8s-diff-port-351000" cluster
	I0520 03:44:51.850341    9448 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:51.850353    9448 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:44:51.850361    9448 cache.go:56] Caching tarball of preloaded images
	I0520 03:44:51.850403    9448 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:44:51.850408    9448 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:44:51.850460    9448 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/default-k8s-diff-port-351000/config.json ...
	I0520 03:44:51.850901    9448 start.go:360] acquireMachinesLock for default-k8s-diff-port-351000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:51.850937    9448 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "default-k8s-diff-port-351000"
	I0520 03:44:51.850947    9448 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:51.850952    9448 fix.go:54] fixHost starting: 
	I0520 03:44:51.851061    9448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-351000: state=Stopped err=<nil>
	W0520 03:44:51.851069    9448 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:51.855457    9448 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-351000" ...
	I0520 03:44:51.863388    9448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c4:54:76:7f:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:51.865300    9448 main.go:141] libmachine: STDOUT: 
	I0520 03:44:51.865320    9448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:51.865348    9448 fix.go:56] duration metric: took 14.395334ms for fixHost
	I0520 03:44:51.865351    9448 start.go:83] releasing machines lock for "default-k8s-diff-port-351000", held for 14.410542ms
	W0520 03:44:51.865357    9448 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:51.865385    9448 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:51.865389    9448 start.go:728] Will try again in 5 seconds ...
	I0520 03:44:56.867533    9448 start.go:360] acquireMachinesLock for default-k8s-diff-port-351000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:57.941660    9448 start.go:364] duration metric: took 1.074016s to acquireMachinesLock for "default-k8s-diff-port-351000"
	I0520 03:44:57.941778    9448 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:44:57.941794    9448 fix.go:54] fixHost starting: 
	I0520 03:44:57.942598    9448 fix.go:112] recreateIfNeeded on default-k8s-diff-port-351000: state=Stopped err=<nil>
	W0520 03:44:57.942625    9448 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:44:57.948258    9448 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-351000" ...
	I0520 03:44:57.965436    9448 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c4:54:76:7f:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/default-k8s-diff-port-351000/disk.qcow2
	I0520 03:44:57.975487    9448 main.go:141] libmachine: STDOUT: 
	I0520 03:44:57.975556    9448 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:57.975630    9448 fix.go:56] duration metric: took 33.837916ms for fixHost
	I0520 03:44:57.975647    9448 start.go:83] releasing machines lock for "default-k8s-diff-port-351000", held for 33.920875ms
	W0520 03:44:57.975813    9448 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-351000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-351000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:57.983190    9448 out.go:177] 
	W0520 03:44:57.987266    9448 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:44:57.987309    9448 out.go:239] * 
	* 
	W0520 03:44:57.989297    9448 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:44:58.000193    9448 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-351000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (60.680375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-588000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (31.301667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-588000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-588000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-588000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.564833ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-588000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-588000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (28.2005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-588000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (27.8725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-588000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-588000 --alsologtostderr -v=1: exit status 83 (40.588458ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-588000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-588000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:44:55.084505    9467 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:55.084659    9467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:55.084662    9467 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:55.084665    9467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:55.084792    9467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:55.085004    9467 out.go:298] Setting JSON to false
	I0520 03:44:55.085010    9467 mustload.go:65] Loading cluster: embed-certs-588000
	I0520 03:44:55.085217    9467 config.go:182] Loaded profile config "embed-certs-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:55.090044    9467 out.go:177] * The control-plane node embed-certs-588000 host is not running: state=Stopped
	I0520 03:44:55.093941    9467 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-588000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-588000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (27.948333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (28.078834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-588000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-246000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-246000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.900799291s)

                                                
                                                
-- stdout --
	* [newest-cni-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-246000" primary control-plane node in "newest-cni-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 03:44:55.537448    9490 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:55.537594    9490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:55.537597    9490 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:55.537599    9490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:55.537722    9490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:55.538834    9490 out.go:298] Setting JSON to false
	I0520 03:44:55.554738    9490 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6266,"bootTime":1716195629,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:44:55.554824    9490 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:44:55.559461    9490 out.go:177] * [newest-cni-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:44:55.566415    9490 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:44:55.570231    9490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:44:55.566478    9490 notify.go:220] Checking for updates...
	I0520 03:44:55.573311    9490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:44:55.576354    9490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:44:55.579366    9490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:44:55.582384    9490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:44:55.585688    9490 config.go:182] Loaded profile config "default-k8s-diff-port-351000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:55.585753    9490 config.go:182] Loaded profile config "multinode-312000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:55.585808    9490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:44:55.590348    9490 out.go:177] * Using the qemu2 driver based on user configuration
	I0520 03:44:55.597345    9490 start.go:297] selected driver: qemu2
	I0520 03:44:55.597355    9490 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:44:55.597362    9490 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:44:55.599569    9490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0520 03:44:55.599607    9490 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0520 03:44:55.607319    9490 out.go:177] * Automatically selected the socket_vmnet network
	I0520 03:44:55.610485    9490 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 03:44:55.610502    9490 cni.go:84] Creating CNI manager for ""
	I0520 03:44:55.610509    9490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:44:55.610513    9490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:44:55.610559    9490 start.go:340] cluster config:
	{Name:newest-cni-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:44:55.614933    9490 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:44:55.622365    9490 out.go:177] * Starting "newest-cni-246000" primary control-plane node in "newest-cni-246000" cluster
	I0520 03:44:55.626392    9490 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:44:55.626409    9490 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:44:55.626417    9490 cache.go:56] Caching tarball of preloaded images
	I0520 03:44:55.626483    9490 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:44:55.626490    9490 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:44:55.626553    9490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/newest-cni-246000/config.json ...
	I0520 03:44:55.626572    9490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/newest-cni-246000/config.json: {Name:mkb7c5b078ef1b585d0a4d4177522cc47b03caff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:44:55.626801    9490 start.go:360] acquireMachinesLock for newest-cni-246000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:44:55.626835    9490 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "newest-cni-246000"
	I0520 03:44:55.626848    9490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:44:55.626880    9490 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:44:55.634370    9490 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:44:55.651270    9490 start.go:159] libmachine.API.Create for "newest-cni-246000" (driver="qemu2")
	I0520 03:44:55.651299    9490 client.go:168] LocalClient.Create starting
	I0520 03:44:55.651359    9490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:44:55.651392    9490 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:55.651401    9490 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:55.651439    9490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:44:55.651463    9490 main.go:141] libmachine: Decoding PEM data...
	I0520 03:44:55.651472    9490 main.go:141] libmachine: Parsing certificate...
	I0520 03:44:55.651835    9490 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:44:55.788023    9490 main.go:141] libmachine: Creating SSH key...
	I0520 03:44:55.913860    9490 main.go:141] libmachine: Creating Disk image...
	I0520 03:44:55.913865    9490 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:44:55.914031    9490 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:44:55.926649    9490 main.go:141] libmachine: STDOUT: 
	I0520 03:44:55.926673    9490 main.go:141] libmachine: STDERR: 
	I0520 03:44:55.926728    9490 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2 +20000M
	I0520 03:44:55.937518    9490 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:44:55.937533    9490 main.go:141] libmachine: STDERR: 
	I0520 03:44:55.937559    9490 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:44:55.937565    9490 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:44:55.937594    9490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:e5:8c:1d:dd:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:44:55.939278    9490 main.go:141] libmachine: STDOUT: 
	I0520 03:44:55.939293    9490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:44:55.939314    9490 client.go:171] duration metric: took 288.015ms to LocalClient.Create
	I0520 03:44:57.941468    9490 start.go:128] duration metric: took 2.314601959s to createHost
	I0520 03:44:57.941523    9490 start.go:83] releasing machines lock for "newest-cni-246000", held for 2.31471625s
	W0520 03:44:57.941597    9490 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:57.961238    9490 out.go:177] * Deleting "newest-cni-246000" in qemu2 ...
	W0520 03:44:58.011441    9490 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:44:58.011480    9490 start.go:728] Will try again in 5 seconds ...
	I0520 03:45:03.013606    9490 start.go:360] acquireMachinesLock for newest-cni-246000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:45:03.014087    9490 start.go:364] duration metric: took 394.542µs to acquireMachinesLock for "newest-cni-246000"
	I0520 03:45:03.014207    9490 start.go:93] Provisioning new machine with config: &{Name:newest-cni-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 03:45:03.014475    9490 start.go:125] createHost starting for "" (driver="qemu2")
	I0520 03:45:03.020013    9490 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 03:45:03.068835    9490 start.go:159] libmachine.API.Create for "newest-cni-246000" (driver="qemu2")
	I0520 03:45:03.068909    9490 client.go:168] LocalClient.Create starting
	I0520 03:45:03.069054    9490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/ca.pem
	I0520 03:45:03.069118    9490 main.go:141] libmachine: Decoding PEM data...
	I0520 03:45:03.069135    9490 main.go:141] libmachine: Parsing certificate...
	I0520 03:45:03.069202    9490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18925-5286/.minikube/certs/cert.pem
	I0520 03:45:03.069249    9490 main.go:141] libmachine: Decoding PEM data...
	I0520 03:45:03.069260    9490 main.go:141] libmachine: Parsing certificate...
	I0520 03:45:03.070086    9490 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso...
	I0520 03:45:03.218428    9490 main.go:141] libmachine: Creating SSH key...
	I0520 03:45:03.338335    9490 main.go:141] libmachine: Creating Disk image...
	I0520 03:45:03.338341    9490 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0520 03:45:03.338518    9490 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:45:03.351180    9490 main.go:141] libmachine: STDOUT: 
	I0520 03:45:03.351201    9490 main.go:141] libmachine: STDERR: 
	I0520 03:45:03.351257    9490 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2 +20000M
	I0520 03:45:03.362312    9490 main.go:141] libmachine: STDOUT: Image resized.
	
	I0520 03:45:03.362333    9490 main.go:141] libmachine: STDERR: 
	I0520 03:45:03.362350    9490 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:45:03.362355    9490 main.go:141] libmachine: Starting QEMU VM...
	I0520 03:45:03.362393    9490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:0f:64:23:02:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:45:03.364123    9490 main.go:141] libmachine: STDOUT: 
	I0520 03:45:03.364137    9490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:45:03.364149    9490 client.go:171] duration metric: took 295.230041ms to LocalClient.Create
	I0520 03:45:05.366371    9490 start.go:128] duration metric: took 2.3518525s to createHost
	I0520 03:45:05.366461    9490 start.go:83] releasing machines lock for "newest-cni-246000", held for 2.352387583s
	W0520 03:45:05.366877    9490 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:45:05.380590    9490 out.go:177] 
	W0520 03:45:05.384692    9490 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:45:05.384746    9490 out.go:239] * 
	* 
	W0520 03:45:05.387451    9490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:45:05.401574    9490 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-246000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000: exit status 7 (67.685875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.97s)
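
The "Connection refused" in this first start is the root failure behind the qemu2-backed tests in this report: every VM start goes through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet before it can hand QEMU a network fd. A minimal Go sketch that reproduces just that failing step in isolation (same socket path as in the logs; this is a probe, not minikube's own code):

	// check_socket_vmnet.go: probe the unix socket that socket_vmnet_client
	// needs. With no daemon listening, Dial fails with "connection refused",
	// matching the STDERR lines above.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the build agent, every qemu2 start in this run fails the same way before a VM ever boots, which matches the uniform GUEST_PROVISION exits across these tests.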

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-351000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (30.376625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
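
The context error above is secondary damage: because the cluster never started, minikube never wrote a "default-k8s-diff-port-351000" context into the kubeconfig, so every kubectl-based assertion fails at client-config time rather than against a live API server. A hedged sketch of that check using client-go's standard loader (the kubeconfig path is the KUBECONFIG shown in the logs):

	// has_context.go: report whether a named context exists in a kubeconfig.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18925-5286/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		name := "default-k8s-diff-port-351000"
		if _, ok := cfg.Contexts[name]; !ok {
			// Same message kubectl prints in the failures above.
			fmt.Printf("context %q does not exist\n", name)
		}
	}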

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-351000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-351000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-351000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.897333ms)

** stderr ** 
	error: context "default-k8s-diff-port-351000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-351000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (27.962375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-351000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (27.332208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
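
The "(-want +got)" block above is go-cmp diff notation: every expected image carries a "-" prefix because "image list" returned nothing from the stopped profile. A minimal sketch of how that output shape is produced (the test's actual comparison helper may differ):

	// image_diff.go: diff an expected image list against an empty result.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // empty: the host never ran, so no images were loaded
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.30.1 images missing (-want +got):\n%s", diff)
		}
	}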

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-351000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-351000 --alsologtostderr -v=1: exit status 83 (39.276125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-351000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-351000"

-- /stdout --
** stderr ** 
	I0520 03:44:58.253792    9515 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:44:58.253922    9515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:58.253926    9515 out.go:304] Setting ErrFile to fd 2...
	I0520 03:44:58.253928    9515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:44:58.254049    9515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:44:58.254253    9515 out.go:298] Setting JSON to false
	I0520 03:44:58.254260    9515 mustload.go:65] Loading cluster: default-k8s-diff-port-351000
	I0520 03:44:58.254452    9515 config.go:182] Loaded profile config "default-k8s-diff-port-351000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:44:58.258881    9515 out.go:177] * The control-plane node default-k8s-diff-port-351000 host is not running: state=Stopped
	I0520 03:44:58.263928    9515 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-351000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-351000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (26.95675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (27.072125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-351000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.09s)
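
Two exit codes recur in these post-mortems: "minikube pause" exits 83 when it finds the host stopped and prints advice instead of pausing, while "minikube status" exits 7 for a stopped host (which the helper flags as "may be ok"). A sketch of how a harness reads those codes when shelling out (binary path and profile name taken from the logs above):

	// exit_code.go: run a command and report its exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "default-k8s-diff-port-351000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit status:", ee.ExitCode()) // 83 against this stopped profile
		}
	}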

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-246000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-246000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.185284459s)

-- stdout --
	* [newest-cni-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-246000" primary control-plane node in "newest-cni-246000" cluster
	* Restarting existing qemu2 VM for "newest-cni-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0520 03:45:08.822993    9568 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:45:08.823140    9568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:45:08.823144    9568 out.go:304] Setting ErrFile to fd 2...
	I0520 03:45:08.823146    9568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:45:08.823275    9568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:45:08.824223    9568 out.go:298] Setting JSON to false
	I0520 03:45:08.840168    9568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6279,"bootTime":1716195629,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:45:08.840231    9568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:45:08.845287    9568 out.go:177] * [newest-cni-246000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:45:08.855221    9568 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:45:08.859049    9568 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:45:08.855257    9568 notify.go:220] Checking for updates...
	I0520 03:45:08.865171    9568 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:45:08.868207    9568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:45:08.871118    9568 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:45:08.874244    9568 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:45:08.877516    9568 config.go:182] Loaded profile config "newest-cni-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:45:08.877770    9568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:45:08.882162    9568 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:45:08.889231    9568 start.go:297] selected driver: qemu2
	I0520 03:45:08.889240    9568 start.go:901] validating driver "qemu2" against &{Name:newest-cni-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:45:08.889301    9568 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:45:08.891607    9568 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 03:45:08.891632    9568 cni.go:84] Creating CNI manager for ""
	I0520 03:45:08.891640    9568 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:45:08.891659    9568 start.go:340] cluster config:
	{Name:newest-cni-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:45:08.896088    9568 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:45:08.903175    9568 out.go:177] * Starting "newest-cni-246000" primary control-plane node in "newest-cni-246000" cluster
	I0520 03:45:08.907151    9568 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:45:08.907167    9568 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:45:08.907180    9568 cache.go:56] Caching tarball of preloaded images
	I0520 03:45:08.907246    9568 preload.go:173] Found /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 03:45:08.907252    9568 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:45:08.907319    9568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/newest-cni-246000/config.json ...
	I0520 03:45:08.907719    9568 start.go:360] acquireMachinesLock for newest-cni-246000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:45:08.907749    9568 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "newest-cni-246000"
	I0520 03:45:08.907760    9568 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:45:08.907766    9568 fix.go:54] fixHost starting: 
	I0520 03:45:08.907879    9568 fix.go:112] recreateIfNeeded on newest-cni-246000: state=Stopped err=<nil>
	W0520 03:45:08.907887    9568 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:45:08.912211    9568 out.go:177] * Restarting existing qemu2 VM for "newest-cni-246000" ...
	I0520 03:45:08.920196    9568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:0f:64:23:02:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:45:08.922286    9568 main.go:141] libmachine: STDOUT: 
	I0520 03:45:08.922309    9568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:45:08.922337    9568 fix.go:56] duration metric: took 14.571875ms for fixHost
	I0520 03:45:08.922341    9568 start.go:83] releasing machines lock for "newest-cni-246000", held for 14.586625ms
	W0520 03:45:08.922347    9568 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:45:08.922378    9568 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:45:08.922382    9568 start.go:728] Will try again in 5 seconds ...
	I0520 03:45:13.924467    9568 start.go:360] acquireMachinesLock for newest-cni-246000: {Name:mk6832261fc6fad95350f503f86ff5dc44a18b5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 03:45:13.924884    9568 start.go:364] duration metric: took 263.625µs to acquireMachinesLock for "newest-cni-246000"
	I0520 03:45:13.925023    9568 start.go:96] Skipping create...Using existing machine configuration
	I0520 03:45:13.925043    9568 fix.go:54] fixHost starting: 
	I0520 03:45:13.925726    9568 fix.go:112] recreateIfNeeded on newest-cni-246000: state=Stopped err=<nil>
	W0520 03:45:13.925752    9568 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 03:45:13.929140    9568 out.go:177] * Restarting existing qemu2 VM for "newest-cni-246000" ...
	I0520 03:45:13.936433    9568 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:0f:64:23:02:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18925-5286/.minikube/machines/newest-cni-246000/disk.qcow2
	I0520 03:45:13.945434    9568 main.go:141] libmachine: STDOUT: 
	I0520 03:45:13.945498    9568 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0520 03:45:13.945576    9568 fix.go:56] duration metric: took 20.535292ms for fixHost
	I0520 03:45:13.945590    9568 start.go:83] releasing machines lock for "newest-cni-246000", held for 20.6835ms
	W0520 03:45:13.945764    9568 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0520 03:45:13.952164    9568 out.go:177] 
	W0520 03:45:13.956151    9568 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0520 03:45:13.956175    9568 out.go:239] * 
	* 
	W0520 03:45:13.958820    9568 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:45:13.967059    9568 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-246000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000: exit status 7 (66.102583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
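
Note the retry visible in this second start: after "StartHost failed, but will try again", the start path releases the machines lock, waits five seconds, and attempts the host fix once more before exiting with GUEST_PROVISION. A simplified sketch of that control flow (fixHost here is a stand-in for the driver call, not minikube's actual signature):

	// retry_start.go: one fixed-delay retry, as in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func fixHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		for attempt := 1; attempt <= 2; attempt++ {
			if err := fixHost(); err == nil {
				fmt.Println("host started")
				return
			} else if attempt == 1 {
				fmt.Println("StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second)
			} else {
				fmt.Println("Exiting due to GUEST_PROVISION:", err)
			}
		}
	}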

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-246000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000: exit status 7 (28.856084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-246000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-246000 --alsologtostderr -v=1: exit status 83 (41.502375ms)

-- stdout --
	* The control-plane node newest-cni-246000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-246000"

-- /stdout --
** stderr ** 
	I0520 03:45:14.148093    9582 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:45:14.148236    9582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:45:14.148239    9582 out.go:304] Setting ErrFile to fd 2...
	I0520 03:45:14.148242    9582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:45:14.148353    9582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:45:14.148586    9582 out.go:298] Setting JSON to false
	I0520 03:45:14.148590    9582 mustload.go:65] Loading cluster: newest-cni-246000
	I0520 03:45:14.148776    9582 config.go:182] Loaded profile config "newest-cni-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:45:14.153047    9582 out.go:177] * The control-plane node newest-cni-246000 host is not running: state=Stopped
	I0520 03:45:14.157049    9582 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-246000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-246000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000: exit status 7 (28.854166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-246000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000: exit status 7 (29.191084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.1/json-events 10.12
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.07
18 TestDownloadOnly/v1.30.1/DeleteAll 0.23
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.32
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
35 TestHyperKitDriverInstallOrUpdate 9.98
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 6.99
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.7
55 TestFunctional/serial/CacheCmd/cache/add_local 1.16
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.21
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.29
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 2.93
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.08
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.44
258 TestNoKubernetes/serial/Stop 3.4
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
275 TestStartStop/group/old-k8s-version/serial/Stop 3.47
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
286 TestStartStop/group/no-preload/serial/Stop 3.32
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 3.72
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.46
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.11
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.13
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-699000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-699000: exit status 85 (93.216417ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |          |
	|         | -p download-only-699000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:19:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:19:00.583330    5824 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:19:00.583477    5824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:00.583480    5824 out.go:304] Setting ErrFile to fd 2...
	I0520 03:19:00.583482    5824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:00.583595    5824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	W0520 03:19:00.583666    5824 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18925-5286/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18925-5286/.minikube/config/config.json: no such file or directory
	I0520 03:19:00.584870    5824 out.go:298] Setting JSON to true
	I0520 03:19:00.601983    5824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4711,"bootTime":1716195629,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:19:00.602060    5824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:19:00.615214    5824 out.go:97] [download-only-699000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:19:00.617536    5824 out.go:169] MINIKUBE_LOCATION=18925
	W0520 03:19:00.615390    5824 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 03:19:00.615404    5824 notify.go:220] Checking for updates...
	I0520 03:19:00.646351    5824 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:19:00.650172    5824 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:19:00.654198    5824 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:19:00.658274    5824 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	W0520 03:19:00.665197    5824 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 03:19:00.665481    5824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:19:00.668153    5824 out.go:97] Using the qemu2 driver based on user configuration
	I0520 03:19:00.668174    5824 start.go:297] selected driver: qemu2
	I0520 03:19:00.668189    5824 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:19:00.668244    5824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:19:00.671161    5824 out.go:169] Automatically selected the socket_vmnet network
	I0520 03:19:00.676802    5824 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 03:19:00.676903    5824 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:19:00.676935    5824 cni.go:84] Creating CNI manager for ""
	I0520 03:19:00.676955    5824 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 03:19:00.677023    5824 start.go:340] cluster config:
	{Name:download-only-699000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:19:00.682497    5824 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:19:00.687158    5824 out.go:97] Downloading VM boot image ...
	I0520 03:19:00.687174    5824 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/iso/arm64/minikube-v1.33.1-1715594774-18869-arm64.iso
	I0520 03:19:05.309957    5824 out.go:97] Starting "download-only-699000" primary control-plane node in "download-only-699000" cluster
	I0520 03:19:05.309988    5824 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:19:05.365830    5824 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:19:05.365838    5824 cache.go:56] Caching tarball of preloaded images
	I0520 03:19:05.365975    5824 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:19:05.370020    5824 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 03:19:05.370027    5824 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:05.446701    5824 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 03:19:10.772670    5824 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:10.772842    5824 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:11.470207    5824 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 03:19:11.470415    5824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/download-only-699000/config.json ...
	I0520 03:19:11.470431    5824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/download-only-699000/config.json: {Name:mk263d35c0fcf02cbcc8f112bd2baeb0331f01ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:19:11.470659    5824 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 03:19:11.470844    5824 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0520 03:19:11.940605    5824 out.go:169] 
	W0520 03:19:11.947616    5824 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380 0x108a6d380] Decompressors:map[bz2:0x140009152d0 gz:0x140009152d8 tar:0x14000915280 tar.bz2:0x14000915290 tar.gz:0x140009152a0 tar.xz:0x140009152b0 tar.zst:0x140009152c0 tbz2:0x14000915290 tgz:0x140009152a0 txz:0x140009152b0 tzst:0x140009152c0 xz:0x140009152e0 zip:0x140009152f0 zst:0x140009152e8] Getters:map[file:0x140007dab50 http:0x1400089a190 https:0x1400089a1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0520 03:19:11.947647    5824 out_reason.go:110] 
	W0520 03:19:11.953592    5824 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 03:19:11.957372    5824 out.go:169] 
	
	
	* The control-plane node download-only-699000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-699000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-699000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.1/json-events (10.12s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-911000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-911000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 : (10.119269583s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (10.12s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-911000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-911000: exit status 85 (72.159458ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | -p download-only-699000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| delete  | -p download-only-699000        | download-only-699000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT | 20 May 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-911000 | jenkins | v1.33.1 | 20 May 24 03:19 PDT |                     |
	|         | -p download-only-911000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 03:19:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 03:19:12.615093    5863 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:19:12.615226    5863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:12.615229    5863 out.go:304] Setting ErrFile to fd 2...
	I0520 03:19:12.615231    5863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:19:12.615365    5863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:19:12.616460    5863 out.go:298] Setting JSON to true
	I0520 03:19:12.632403    5863 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4723,"bootTime":1716195629,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:19:12.632470    5863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:19:12.637001    5863 out.go:97] [download-only-911000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:19:12.641078    5863 out.go:169] MINIKUBE_LOCATION=18925
	I0520 03:19:12.637103    5863 notify.go:220] Checking for updates...
	I0520 03:19:12.647999    5863 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:19:12.651115    5863 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:19:12.654021    5863 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:19:12.657053    5863 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	W0520 03:19:12.662971    5863 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 03:19:12.663168    5863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:19:12.665978    5863 out.go:97] Using the qemu2 driver based on user configuration
	I0520 03:19:12.665988    5863 start.go:297] selected driver: qemu2
	I0520 03:19:12.665992    5863 start.go:901] validating driver "qemu2" against <nil>
	I0520 03:19:12.666042    5863 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 03:19:12.668993    5863 out.go:169] Automatically selected the socket_vmnet network
	I0520 03:19:12.674099    5863 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0520 03:19:12.674199    5863 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 03:19:12.674215    5863 cni.go:84] Creating CNI manager for ""
	I0520 03:19:12.674223    5863 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 03:19:12.674230    5863 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 03:19:12.674267    5863 start.go:340] cluster config:
	{Name:download-only-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:19:12.678462    5863 iso.go:125] acquiring lock: {Name:mkb3194b378c7021103193e39bcd164dabdcceab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 03:19:12.681032    5863 out.go:97] Starting "download-only-911000" primary control-plane node in "download-only-911000" cluster
	I0520 03:19:12.681040    5863 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:19:12.731372    5863 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:19:12.731383    5863 cache.go:56] Caching tarball of preloaded images
	I0520 03:19:12.731519    5863 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:19:12.735050    5863 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 03:19:12.735056    5863 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:12.806058    5863 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 03:19:16.992017    5863 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:16.992173    5863 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 03:19:17.535186    5863 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 03:19:17.535394    5863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/download-only-911000/config.json ...
	I0520 03:19:17.535410    5863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18925-5286/.minikube/profiles/download-only-911000/config.json: {Name:mk3ab59e559a13006c19f175c50abe20da13ae0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 03:19:17.535966    5863 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 03:19:17.536088    5863 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18925-5286/.minikube/cache/darwin/arm64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-911000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-911000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-911000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.32s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-980000 --alsologtostderr --binary-mirror http://127.0.0.1:50846 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-980000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-980000
--- PASS: TestBinaryMirror (0.32s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-091000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-091000: exit status 85 (58.513334ms)

-- stdout --
	* Profile "addons-091000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-091000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-091000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-091000: exit status 85 (54.320334ms)

-- stdout --
	* Profile "addons-091000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-091000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestHyperKitDriverInstallOrUpdate (9.98s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.98s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status: exit status 7 (31.333833ms)

-- stdout --
	nospam-498000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status: exit status 7 (28.928791ms)

-- stdout --
	nospam-498000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status: exit status 7 (29.058125ms)

-- stdout --
	nospam-498000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause: exit status 83 (38.613042ms)

-- stdout --
	* The control-plane node nospam-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-498000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause: exit status 83 (38.997625ms)

-- stdout --
	* The control-plane node nospam-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-498000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause: exit status 83 (38.69975ms)

-- stdout --
	* The control-plane node nospam-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-498000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause: exit status 83 (38.140542ms)

-- stdout --
	* The control-plane node nospam-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-498000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause: exit status 83 (37.638459ms)

-- stdout --
	* The control-plane node nospam-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-498000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause: exit status 83 (39.731292ms)

-- stdout --
	* The control-plane node nospam-498000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-498000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (6.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 stop: (3.050633625s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 stop: (1.951581459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-498000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-498000 stop: (1.985291292s)
--- PASS: TestErrorSpam/stop (6.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18925-5286/.minikube/files/etc/test/nested/copy/5818/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.70s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local79136793/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cache add minikube-local-cache-test:functional-357000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 cache delete minikube-local-cache-test:functional-357000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-357000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 config get cpus: exit status 14 (28.60025ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 config get cpus: exit status 14 (29.002292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-357000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-357000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (165.9215ms)

-- stdout --
	* [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0520 03:21:05.203241    6472 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:21:05.203412    6472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.203417    6472 out.go:304] Setting ErrFile to fd 2...
	I0520 03:21:05.203421    6472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.203587    6472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:21:05.205023    6472 out.go:298] Setting JSON to false
	I0520 03:21:05.226338    6472 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4836,"bootTime":1716195629,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:21:05.226401    6472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:21:05.232035    6472 out.go:177] * [functional-357000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0520 03:21:05.239968    6472 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:21:05.239996    6472 notify.go:220] Checking for updates...
	I0520 03:21:05.246899    6472 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:21:05.249909    6472 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:21:05.252975    6472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:21:05.255890    6472 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:21:05.258913    6472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:21:05.266345    6472 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:21:05.266697    6472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:21:05.269880    6472 out.go:177] * Using the qemu2 driver based on existing profile
	I0520 03:21:05.276925    6472 start.go:297] selected driver: qemu2
	I0520 03:21:05.276936    6472 start.go:901] validating driver "qemu2" against &{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:21:05.276989    6472 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:21:05.282903    6472 out.go:177] 
	W0520 03:21:05.286880    6472 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 03:21:05.290939    6472 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-357000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-357000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-357000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (105.898667ms)

-- stdout --
	* [functional-357000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0520 03:21:05.435852    6483 out.go:291] Setting OutFile to fd 1 ...
	I0520 03:21:05.435968    6483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.435971    6483 out.go:304] Setting ErrFile to fd 2...
	I0520 03:21:05.435973    6483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 03:21:05.436111    6483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18925-5286/.minikube/bin
	I0520 03:21:05.437533    6483 out.go:298] Setting JSON to false
	I0520 03:21:05.454033    6483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4836,"bootTime":1716195629,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0520 03:21:05.454113    6483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0520 03:21:05.458951    6483 out.go:177] * [functional-357000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0520 03:21:05.465933    6483 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 03:21:05.466009    6483 notify.go:220] Checking for updates...
	I0520 03:21:05.469973    6483 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	I0520 03:21:05.472911    6483 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0520 03:21:05.475976    6483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 03:21:05.478902    6483 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	I0520 03:21:05.481896    6483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 03:21:05.485203    6483 config.go:182] Loaded profile config "functional-357000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 03:21:05.485474    6483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 03:21:05.489968    6483 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0520 03:21:05.496943    6483 start.go:297] selected driver: qemu2
	I0520 03:21:05.496952    6483 start.go:901] validating driver "qemu2" against &{Name:functional-357000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 03:21:05.497028    6483 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 03:21:05.502902    6483 out.go:177] 
	W0520 03:21:05.506967    6483 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 03:21:05.510918    6483 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.247696917s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-357000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.29s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image rm gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-357000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 image save --daemon gcr.io/google-containers/addon-resizer:functional-357000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-357000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "67.495333ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.548083ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "68.799041ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.781292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013241s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
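Note on the lookup above: dscacheutil queries macOS's system resolver (the directory-services cache) rather than Go's built-in resolver, which is why the tunnel DNS test shells out to it. As a minimal illustrative sketch, not part of the test suite, the same probe can be reproduced as follows; the command and hostname are the ones shown in the log, while the "ip_address" substring check is an assumption about dscacheutil's output format:

	// dnsprobe.go: rerun the tunnel DNS lookup via dscacheutil (macOS only).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		host := "nginx-svc.default.svc.cluster.local."
		// Same command as functional_test_tunnel_test.go:351 above.
		out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", host).CombinedOutput()
		if err != nil {
			fmt.Println("dscacheutil failed:", err)
			return
		}
		if strings.Contains(string(out), "ip_address") {
			fmt.Print(string(out)) // resolver returned at least one address
		} else {
			fmt.Println("no address returned for", host)
		}
	}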

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-357000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-357000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-357000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-357000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.93s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-469000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-469000 --output=json --user=testUser: (2.931923667s)
--- PASS: TestJSONOutput/stop/Command (2.93s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-791000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-791000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.404542ms)

-- stdout --
	{"specversion":"1.0","id":"e37ae09a-971b-4d34-98b5-189ff988bb8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-791000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"acd43096-4400-4d19-97d8-e064749a1979","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"4cacf7fb-8eca-4f59-b3cc-d451f735d7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig"}}
	{"specversion":"1.0","id":"8cf95627-f731-46f0-93a8-05b2fdfd1282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d4494766-8b1e-48c6-bf77-4697339fa6b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3085a7c4-fe87-4b0f-8c27-02560d048241","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube"}}
	{"specversion":"1.0","id":"fe0749c4-cfa4-46bd-a254-69a9bd566fc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5617b37b-bb8c-47a5-8d1d-1a0b3e727e3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-791000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-791000
--- PASS: TestErrorJSONOutput (0.32s)
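Note on the stdout above: with --output=json every line minikube emits is a self-contained CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data), which is what the TestJSONOutput cases earlier in this run parse. As a minimal sketch of a consumer, not part of the test suite, the stream can be filtered for the error event; the field names below are taken from the captured output, everything else is illustrative:

	// eventfilter.go: pipe `minikube start --output=json ...` into stdin.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the envelope visible in the log; the data values
	// there are all strings (e.g. "exitcode":"56").
	type minikubeEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

For the run captured above, this would print: DRV_UNSUPPORTED_OS (exit code 56): The driver 'fail' is not supported on darwin/arm64.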

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.08s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-727000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.64425ms)

-- stdout --
	* [NoKubernetes-727000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18925-5286/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18925-5286/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-727000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-727000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.508666ms)

-- stdout --
	* The control-plane node NoKubernetes-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-727000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.44s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.61214625s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.826823041s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.44s)

TestNoKubernetes/serial/Stop (3.4s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-727000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-727000: (3.400145709s)
--- PASS: TestNoKubernetes/serial/Stop (3.40s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-727000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-727000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.678584ms)

-- stdout --
	* The control-plane node NoKubernetes-727000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-727000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-555000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (3.47s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-215000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-215000 --alsologtostderr -v=3: (3.467840625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-215000 -n old-k8s-version-215000: exit status 7 (52.779792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-215000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.32s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-828000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-828000 --alsologtostderr -v=3: (3.322101292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-828000 -n no-preload-828000: exit status 7 (53.874042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-828000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.72s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-588000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-588000 --alsologtostderr -v=3: (3.715756459s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.72s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-351000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-351000 --alsologtostderr -v=3: (3.460636209s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.46s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-588000 -n embed-certs-588000: exit status 7 (54.695375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-588000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-351000 -n default-k8s-diff-port-351000: exit status 7 (34.65375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-351000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-246000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.13s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-246000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-246000 --alsologtostderr -v=3: (3.13356025s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-246000 -n newest-cni-246000: exit status 7 (57.484416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-246000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (15.05s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2098307826/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716200426182500000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2098307826/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716200426182500000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2098307826/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716200426182500000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2098307826/001/test-1716200426182500000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (54.26225ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.569916ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.063667ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.079375ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.914542ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.182792ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.870667ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.200334ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo umount -f /mount-9p": exit status 83 (45.656875ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2098307826/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (15.05s)

TestFunctional/parallel/MountCmd/specific-port (12.89s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2495584971/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.896541ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.332625ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.469791ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.048ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (79.447125ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.185458ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.371292ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "sudo umount -f /mount-9p": exit status 83 (45.657791ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-357000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2495584971/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (11.01s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2553247714/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2553247714/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2553247714/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (79.858458ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (82.518458ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (85.441708ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (85.563709ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (88.086167ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (84.71125ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-357000 ssh "findmnt -T" /mount1: exit status 83 (89.03275ms)

-- stdout --
	* The control-plane node functional-357000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-357000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2553247714/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2553247714/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-357000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2553247714/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-225000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-225000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-225000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/hosts:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/resolv.conf:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-225000

>>> host: crictl pods:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: crictl containers:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> k8s: describe netcat deployment:
error: context "cilium-225000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-225000" does not exist

>>> k8s: netcat logs:
error: context "cilium-225000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-225000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-225000" does not exist

>>> k8s: coredns logs:
error: context "cilium-225000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-225000" does not exist

>>> k8s: api server logs:
error: context "cilium-225000" does not exist

>>> host: /etc/cni:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: ip a s:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: ip r s:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: iptables-save:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: iptables table nat:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-225000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-225000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-225000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-225000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-225000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-225000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-225000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-225000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-225000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-225000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-225000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: kubelet daemon config:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> k8s: kubelet logs:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-225000

>>> host: docker daemon status:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: docker daemon config:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: docker system info:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: cri-docker daemon status:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: cri-docker daemon config:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: cri-dockerd version:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: containerd daemon status:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: containerd daemon config:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: containerd config dump:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: crio daemon status:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: crio daemon config:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: /etc/crio:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

>>> host: crio config:
* Profile "cilium-225000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-225000"

----------------------- debugLogs end: cilium-225000 [took: 2.142199292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-225000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-225000
--- SKIP: TestNetworkPlugins/group/cilium (2.37s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-766000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)
